Feb 13 15:38:28.010609 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:38:28.010655 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:38:28.010672 kernel: BIOS-provided physical RAM map:
Feb 13 15:38:28.010684 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:38:28.010695 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:38:28.010707 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:38:28.010725 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:38:28.010737 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:38:28.010750 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:38:28.010762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:38:28.010775 kernel: NX (Execute Disable) protection: active
Feb 13 15:38:28.010787 kernel: APIC: Static calls initialized
Feb 13 15:38:28.010800 kernel: SMBIOS 2.7 present.
Feb 13 15:38:28.010813 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:38:28.010831 kernel: Hypervisor detected: KVM
Feb 13 15:38:28.010846 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:38:28.010859 kernel: kvm-clock: using sched offset of 8906236438 cycles
Feb 13 15:38:28.010874 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:38:28.010889 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 15:38:28.010903 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:38:28.010918 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:38:28.010934 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:38:28.010949 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:38:28.010963 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:38:28.010989 kernel: Using GB pages for direct mapping
Feb 13 15:38:28.011004 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:38:28.011018 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:38:28.011032 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:38:28.011046 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:38:28.011060 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:38:28.011078 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:38:28.011092 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:38:28.011106 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:38:28.011120 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:38:28.011134 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:38:28.011149 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:38:28.011163 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:38:28.011177 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:38:28.011191 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:38:28.011209 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:38:28.011229 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:38:28.011244 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:38:28.011258 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:38:28.011274 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:38:28.011292 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:38:28.011307 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:38:28.011391 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:38:28.011409 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:38:28.011424 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:38:28.011440 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:38:28.011455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:38:28.011470 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:38:28.011485 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:38:28.011504 kernel: Zone ranges:
Feb 13 15:38:28.011519 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:38:28.011534 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:38:28.011549 kernel: Normal empty
Feb 13 15:38:28.011564 kernel: Movable zone start for each node
Feb 13 15:38:28.011579 kernel: Early memory node ranges
Feb 13 15:38:28.011594 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:38:28.011610 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:38:28.011625 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:38:28.011643 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:38:28.011658 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:38:28.011673 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:38:28.011688 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:38:28.011704 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:38:28.011719 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:38:28.011734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:38:28.011750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:38:28.011765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:38:28.011783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:38:28.011799 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:38:28.011814 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:38:28.011829 kernel: TSC deadline timer available
Feb 13 15:38:28.011844 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:38:28.011859 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:38:28.011874 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:38:28.011890 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:38:28.011905 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:38:28.011920 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:38:28.011939 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:38:28.011954 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:38:28.011969 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:38:28.015922 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:38:28.016251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:38:28.016268 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:38:28.016282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:38:28.016302 kernel: random: crng init done
Feb 13 15:38:28.016316 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:38:28.016329 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:38:28.016342 kernel: Fallback order for Node 0: 0
Feb 13 15:38:28.016356 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:38:28.016368 kernel: Policy zone: DMA32
Feb 13 15:38:28.016382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:38:28.016396 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:38:28.016409 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:38:28.016425 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:38:28.016438 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:38:28.016450 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:38:28.016462 kernel: Dynamic Preempt: voluntary
Feb 13 15:38:28.016475 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:38:28.016489 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:38:28.016501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:38:28.016514 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:38:28.016527 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:38:28.016540 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:38:28.016555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:38:28.016568 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:38:28.016581 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:38:28.016593 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:38:28.016606 kernel: Console: colour VGA+ 80x25
Feb 13 15:38:28.016618 kernel: printk: console [ttyS0] enabled
Feb 13 15:38:28.016631 kernel: ACPI: Core revision 20230628
Feb 13 15:38:28.016644 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:38:28.016657 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:38:28.016672 kernel: x2apic enabled
Feb 13 15:38:28.016685 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:38:28.016709 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:38:28.016725 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 15:38:28.016739 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:38:28.016752 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:38:28.016766 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:38:28.016779 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:38:28.016791 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:38:28.016804 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:38:28.016818 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:38:28.016831 kernel: RETBleed: Vulnerable
Feb 13 15:38:28.016848 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:38:28.016862 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:38:28.016875 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:38:28.016888 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:38:28.016902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:38:28.016915 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:38:28.016932 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:38:28.016945 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:38:28.016958 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:38:28.017046 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:38:28.017060 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:38:28.017073 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:38:28.017086 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:38:28.017194 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:38:28.017214 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:38:28.017227 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:38:28.017241 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:38:28.017259 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:38:28.017273 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:38:28.017287 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:38:28.017300 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:38:28.017314 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:38:28.017327 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:38:28.017340 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:38:28.017354 kernel: landlock: Up and running.
Feb 13 15:38:28.017367 kernel: SELinux: Initializing.
Feb 13 15:38:28.017380 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.017393 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.017406 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:38:28.017423 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:28.017437 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:28.017450 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:28.017463 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:38:28.017477 kernel: signal: max sigframe size: 3632
Feb 13 15:38:28.017491 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:38:28.017505 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:38:28.017519 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:38:28.017532 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:38:28.017549 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:38:28.017562 kernel: .... node #0, CPUs: #1
Feb 13 15:38:28.017578 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:38:28.017593 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:38:28.017605 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:38:28.017619 kernel: smpboot: Max logical packages: 1
Feb 13 15:38:28.017632 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 15:38:28.017645 kernel: devtmpfs: initialized
Feb 13 15:38:28.017662 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:38:28.017674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:38:28.017688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.017702 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:38:28.017716 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:38:28.017729 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:38:28.017742 kernel: audit: type=2000 audit(1739461107.553:1): state=initialized audit_enabled=0 res=1
Feb 13 15:38:28.017755 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:38:28.017774 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:38:28.017792 kernel: cpuidle: using governor menu
Feb 13 15:38:28.017865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:38:28.017879 kernel: dca service started, version 1.12.1
Feb 13 15:38:28.017893 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:38:28.017906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:38:28.017918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:38:28.017932 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:38:28.017945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:38:28.017957 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:38:28.019070 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:38:28.019093 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:38:28.019107 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:38:28.019121 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:38:28.019136 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:38:28.019150 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:38:28.019164 kernel: ACPI: Interpreter enabled
Feb 13 15:38:28.019178 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:38:28.019191 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:38:28.019211 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:38:28.019225 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:38:28.019239 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:38:28.019251 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:38:28.019645 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:38:28.019792 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:38:28.019924 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:38:28.019942 kernel: acpiphp: Slot [3] registered
Feb 13 15:38:28.019960 kernel: acpiphp: Slot [4] registered
Feb 13 15:38:28.019985 kernel: acpiphp: Slot [5] registered
Feb 13 15:38:28.022730 kernel: acpiphp: Slot [6] registered
Feb 13 15:38:28.022747 kernel: acpiphp: Slot [7] registered
Feb 13 15:38:28.022761 kernel: acpiphp: Slot [8] registered
Feb 13 15:38:28.022775 kernel: acpiphp: Slot [9] registered
Feb 13 15:38:28.022789 kernel: acpiphp: Slot [10] registered
Feb 13 15:38:28.022803 kernel: acpiphp: Slot [11] registered
Feb 13 15:38:28.022816 kernel: acpiphp: Slot [12] registered
Feb 13 15:38:28.022837 kernel: acpiphp: Slot [13] registered
Feb 13 15:38:28.022851 kernel: acpiphp: Slot [14] registered
Feb 13 15:38:28.022865 kernel: acpiphp: Slot [15] registered
Feb 13 15:38:28.022879 kernel: acpiphp: Slot [16] registered
Feb 13 15:38:28.022892 kernel: acpiphp: Slot [17] registered
Feb 13 15:38:28.022905 kernel: acpiphp: Slot [18] registered
Feb 13 15:38:28.022919 kernel: acpiphp: Slot [19] registered
Feb 13 15:38:28.022933 kernel: acpiphp: Slot [20] registered
Feb 13 15:38:28.022946 kernel: acpiphp: Slot [21] registered
Feb 13 15:38:28.022959 kernel: acpiphp: Slot [22] registered
Feb 13 15:38:28.023016 kernel: acpiphp: Slot [23] registered
Feb 13 15:38:28.023030 kernel: acpiphp: Slot [24] registered
Feb 13 15:38:28.023044 kernel: acpiphp: Slot [25] registered
Feb 13 15:38:28.023058 kernel: acpiphp: Slot [26] registered
Feb 13 15:38:28.023071 kernel: acpiphp: Slot [27] registered
Feb 13 15:38:28.023085 kernel: acpiphp: Slot [28] registered
Feb 13 15:38:28.023098 kernel: acpiphp: Slot [29] registered
Feb 13 15:38:28.023113 kernel: acpiphp: Slot [30] registered
Feb 13 15:38:28.023126 kernel: acpiphp: Slot [31] registered
Feb 13 15:38:28.023143 kernel: PCI host bridge to bus 0000:00
Feb 13 15:38:28.023328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:38:28.023449 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:38:28.023709 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:38:28.023827 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:38:28.023939 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:38:28.024447 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:38:28.024613 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:38:28.024750 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:38:28.024878 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:38:28.025023 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:38:28.025155 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:38:28.025280 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:38:28.025406 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:38:28.025538 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:38:28.025662 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:38:28.025793 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:38:28.026358 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:38:28.026496 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:38:28.026623 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:38:28.026747 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:38:28.026884 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:38:28.028171 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:38:28.028327 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:38:28.028460 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:38:28.028479 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:38:28.028493 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:38:28.028514 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:38:28.028529 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:38:28.028543 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:38:28.028557 kernel: iommu: Default domain type: Translated
Feb 13 15:38:28.028571 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:38:28.028585 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:38:28.028599 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:38:28.028613 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:38:28.028626 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:38:28.028751 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:38:28.028884 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:38:28.029228 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:38:28.029250 kernel: vgaarb: loaded
Feb 13 15:38:28.029264 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:38:28.029278 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:38:28.029291 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:38:28.029304 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:38:28.029319 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:38:28.029337 kernel: pnp: PnP ACPI init
Feb 13 15:38:28.029351 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:38:28.029365 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:38:28.029379 kernel: NET: Registered PF_INET protocol family
Feb 13 15:38:28.029392 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:38:28.029406 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:38:28.029420 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:38:28.029434 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:38:28.029448 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:38:28.029464 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:38:28.029477 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.029490 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.029503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:38:28.029516 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:38:28.029782 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:38:28.029962 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:38:28.031161 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:38:28.031291 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:38:28.031428 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:38:28.031446 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:38:28.031462 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:38:28.031476 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:38:28.031490 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:38:28.031504 kernel: Initialise system trusted keyrings
Feb 13 15:38:28.031518 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:38:28.031536 kernel: Key type asymmetric registered
Feb 13 15:38:28.031549 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:38:28.031562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:38:28.031576 kernel: io scheduler mq-deadline registered
Feb 13 15:38:28.031589 kernel: io scheduler kyber registered
Feb 13 15:38:28.031603 kernel: io scheduler bfq registered
Feb 13 15:38:28.031617 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:38:28.031631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:38:28.031645 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:38:28.031661 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:38:28.031675 kernel: i8042: Warning: Keylock active
Feb 13 15:38:28.031688 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:38:28.031703 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:38:28.031838 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:38:28.031958 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:38:28.032221 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:38:27 UTC (1739461107)
Feb 13 15:38:28.032345 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:38:28.032367 kernel: intel_pstate: CPU model not supported
Feb 13 15:38:28.032381 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:38:28.032395 kernel: Segment Routing with IPv6
Feb 13 15:38:28.032408 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:38:28.032422 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:38:28.032435 kernel: Key type dns_resolver registered
Feb 13 15:38:28.032448 kernel: IPI shorthand broadcast: enabled
Feb 13 15:38:28.032462 kernel: sched_clock: Marking stable (559001963, 201621638)->(834769442, -74145841)
Feb 13 15:38:28.032475 kernel: registered taskstats version 1
Feb 13 15:38:28.032492 kernel: Loading compiled-in X.509 certificates
Feb 13 15:38:28.032506 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:38:28.032519 kernel: Key type .fscrypt registered
Feb 13 15:38:28.032533 kernel: Key type fscrypt-provisioning registered
Feb 13 15:38:28.032547 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:38:28.032561 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:38:28.032575 kernel: ima: No architecture policies found
Feb 13 15:38:28.032588 kernel: clk: Disabling unused clocks
Feb 13 15:38:28.032602 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:38:28.032618 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:38:28.032631 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:38:28.032645 kernel: Run /init as init process
Feb 13 15:38:28.032659 kernel: with arguments:
Feb 13 15:38:28.032672 kernel: /init
Feb 13 15:38:28.032685 kernel: with environment:
Feb 13 15:38:28.032698 kernel: HOME=/
Feb 13 15:38:28.032711 kernel: TERM=linux
Feb 13 15:38:28.032725 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:38:28.032748 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:38:28.032779 systemd[1]: Detected virtualization amazon.
Feb 13 15:38:28.032797 systemd[1]: Detected architecture x86-64.
Feb 13 15:38:28.032811 systemd[1]: Running in initrd.
Feb 13 15:38:28.032826 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:38:28.032843 systemd[1]: Hostname set to .
Feb 13 15:38:28.032858 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:38:28.032873 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:38:28.032888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:38:28.032902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:38:28.032919 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:38:28.032934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:38:28.032949 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:38:28.032967 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:38:28.033264 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:38:28.033281 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:38:28.033296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:38:28.033311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:28.033725 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:38:28.033751 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:38:28.033773 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:38:28.033883 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:38:28.033901 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:38:28.033917 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:38:28.033932 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:38:28.033948 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:38:28.033963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:38:28.033999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:38:28.034020 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:38:28.034036 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:38:28.034051 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:38:28.034066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:38:28.034081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:38:28.034097 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:38:28.034112 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:38:28.034130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:38:28.034146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:28.034161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:38:28.034176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:28.034191 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:38:28.034211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:38:28.034261 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:38:28.034298 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:38:28.034318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:38:28.034340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:38:28.034355 kernel: Bridge firewalling registered
Feb 13 15:38:28.034370 systemd-journald[179]: Journal started
Feb 13 15:38:28.034400 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2e0c190a81e019431bf822a02197eb) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:38:27.975130 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:38:28.199874 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:38:28.030476 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:38:28.203797 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:28.214505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:28.228224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:28.234189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:38:28.237205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:38:28.240829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:38:28.266429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:28.283032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:28.299248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:38:28.312636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:28.357355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:28.359524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:28.375177 dracut-cmdline[211]: dracut-dracut-053
Feb 13 15:38:28.389394 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:38:28.438267 systemd-resolved[214]: Positive Trust Anchors:
Feb 13 15:38:28.438288 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:28.438336 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:38:28.451810 systemd-resolved[214]: Defaulting to hostname 'linux'.
Feb 13 15:38:28.454458 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:28.456230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:28.497002 kernel: SCSI subsystem initialized
Feb 13 15:38:28.507012 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:38:28.518003 kernel: iscsi: registered transport (tcp)
Feb 13 15:38:28.541141 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:38:28.541230 kernel: QLogic iSCSI HBA Driver
Feb 13 15:38:28.619683 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:38:28.627323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:38:28.666395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:38:28.666476 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:38:28.666498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:38:28.738038 kernel: raid6: avx512x4 gen() 14664 MB/s
Feb 13 15:38:28.761026 kernel: raid6: avx512x2 gen() 8345 MB/s
Feb 13 15:38:28.780043 kernel: raid6: avx512x1 gen() 9505 MB/s
Feb 13 15:38:28.797035 kernel: raid6: avx2x4 gen() 7149 MB/s
Feb 13 15:38:28.814733 kernel: raid6: avx2x2 gen() 6840 MB/s
Feb 13 15:38:28.831376 kernel: raid6: avx2x1 gen() 9332 MB/s
Feb 13 15:38:28.831452 kernel: raid6: using algorithm avx512x4 gen() 14664 MB/s
Feb 13 15:38:28.849012 kernel: raid6: .... xor() 6297 MB/s, rmw enabled
Feb 13 15:38:28.849088 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:38:28.872002 kernel: xor: automatically using best checksumming function avx
Feb 13 15:38:29.121999 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:38:29.148679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:38:29.164651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:29.179520 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 15:38:29.185110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:29.232592 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:38:29.284646 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Feb 13 15:38:29.355614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:38:29.365255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:38:29.451802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:38:29.464511 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:38:29.507355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:38:29.508455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:38:29.514213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:38:29.515906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:38:29.528952 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:38:29.576669 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:38:29.579460 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:38:29.630290 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:38:29.630487 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:38:29.630666 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:38:29.630687 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ac:29:05:c4:1d
Feb 13 15:38:29.635169 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:38:29.635237 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:38:29.637150 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:38:29.650563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:38:29.651824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:29.659955 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:29.661889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:38:29.665265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:29.669828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:29.690429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:29.703315 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:38:29.703655 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:38:29.717999 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:38:29.725341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:38:29.725964 kernel: GPT:9289727 != 16777215
Feb 13 15:38:29.726043 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:38:29.726067 kernel: GPT:9289727 != 16777215
Feb 13 15:38:29.726088 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:38:29.726111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:38:29.818002 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (461)
Feb 13 15:38:29.865446 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Feb 13 15:38:29.869276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:29.879222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:29.948900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:29.974738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:38:29.978085 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:38:29.991650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:38:30.006690 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:38:30.015418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:38:30.028056 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:38:30.040536 disk-uuid[631]: Primary Header is updated.
Feb 13 15:38:30.040536 disk-uuid[631]: Secondary Entries is updated.
Feb 13 15:38:30.040536 disk-uuid[631]: Secondary Header is updated.
Feb 13 15:38:30.052022 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:38:31.105027 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:38:31.106175 disk-uuid[632]: The operation has completed successfully.
Feb 13 15:38:31.309498 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:38:31.309627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:38:31.330207 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:38:31.347751 sh[892]: Success
Feb 13 15:38:31.376025 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:38:31.472594 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:38:31.489112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:38:31.491609 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:38:31.519097 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:38:31.519163 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:31.519183 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:38:31.520637 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:38:31.520663 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:38:31.639005 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:38:31.660710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:38:31.664203 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:38:31.671196 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:38:31.686663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:38:31.721764 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:38:31.721841 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:31.721869 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:38:31.733068 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:38:31.749238 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:38:31.752464 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:38:31.767087 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:38:31.777246 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:38:31.875149 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:38:31.882316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:38:31.920139 systemd-networkd[1085]: lo: Link UP
Feb 13 15:38:31.920151 systemd-networkd[1085]: lo: Gained carrier
Feb 13 15:38:31.921760 systemd-networkd[1085]: Enumeration completed
Feb 13 15:38:31.921893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:38:31.922335 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:31.922340 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:38:31.924297 systemd[1]: Reached target network.target - Network.
Feb 13 15:38:31.929407 systemd-networkd[1085]: eth0: Link UP
Feb 13 15:38:31.929411 systemd-networkd[1085]: eth0: Gained carrier
Feb 13 15:38:31.929423 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:31.945080 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.17.42/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:38:32.111411 ignition[1017]: Ignition 2.20.0
Feb 13 15:38:32.111896 ignition[1017]: Stage: fetch-offline
Feb 13 15:38:32.112286 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:32.112300 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:32.113534 ignition[1017]: Ignition finished successfully
Feb 13 15:38:32.120921 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:38:32.131343 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:38:32.160257 ignition[1095]: Ignition 2.20.0
Feb 13 15:38:32.160270 ignition[1095]: Stage: fetch
Feb 13 15:38:32.160791 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:32.160805 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:32.160927 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:32.196937 ignition[1095]: PUT result: OK
Feb 13 15:38:32.201406 ignition[1095]: parsed url from cmdline: ""
Feb 13 15:38:32.201419 ignition[1095]: no config URL provided
Feb 13 15:38:32.201429 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:38:32.201445 ignition[1095]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:38:32.201473 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:32.202878 ignition[1095]: PUT result: OK
Feb 13 15:38:32.202937 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:38:32.212173 ignition[1095]: GET result: OK
Feb 13 15:38:32.216953 ignition[1095]: parsing config with SHA512: c45e8284308fb78f261c5bf5f9ebae79f0d25113c7e1b31b68ca32ef2b5ad507fbb6a75a53f31f0ef0788a0d1bf6edfe08cefc798ba61d1f0afe43a2740568f5
Feb 13 15:38:32.240804 unknown[1095]: fetched base config from "system"
Feb 13 15:38:32.241294 unknown[1095]: fetched base config from "system"
Feb 13 15:38:32.241304 unknown[1095]: fetched user config from "aws"
Feb 13 15:38:32.242250 ignition[1095]: fetch: fetch complete
Feb 13 15:38:32.246432 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:38:32.242258 ignition[1095]: fetch: fetch passed
Feb 13 15:38:32.242332 ignition[1095]: Ignition finished successfully
Feb 13 15:38:32.270257 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:38:32.315554 ignition[1101]: Ignition 2.20.0
Feb 13 15:38:32.315674 ignition[1101]: Stage: kargs
Feb 13 15:38:32.316384 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:32.316401 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:32.316999 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:32.318631 ignition[1101]: PUT result: OK
Feb 13 15:38:32.324585 ignition[1101]: kargs: kargs passed
Feb 13 15:38:32.324647 ignition[1101]: Ignition finished successfully
Feb 13 15:38:32.327360 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:38:32.333300 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:38:32.369375 ignition[1108]: Ignition 2.20.0
Feb 13 15:38:32.369390 ignition[1108]: Stage: disks
Feb 13 15:38:32.369873 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:32.369887 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:32.370021 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:32.371599 ignition[1108]: PUT result: OK
Feb 13 15:38:32.379813 ignition[1108]: disks: disks passed
Feb 13 15:38:32.379894 ignition[1108]: Ignition finished successfully
Feb 13 15:38:32.384649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:38:32.385204 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:38:32.393890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:38:32.397063 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:38:32.399624 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:38:32.402084 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:38:32.413532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:38:32.457893 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:38:32.465833 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:38:32.474157 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:38:32.690997 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:38:32.692791 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:38:32.695222 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:38:32.717240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:38:32.734811 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:38:32.742260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:38:32.742327 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:38:32.742363 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:38:32.770000 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1135)
Feb 13 15:38:32.773467 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:38:32.773507 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:32.773527 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:38:32.787224 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:38:32.800871 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:38:32.802743 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:38:32.812437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:38:33.241467 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:38:33.268341 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:38:33.293575 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:38:33.322081 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:38:33.533697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:38:33.538404 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:38:33.540814 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:38:33.563999 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:38:33.564037 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:38:33.597892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:38:33.600224 ignition[1248]: INFO : Ignition 2.20.0
Feb 13 15:38:33.602880 ignition[1248]: INFO : Stage: mount
Feb 13 15:38:33.602880 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:33.602880 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:33.602880 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:33.608528 ignition[1248]: INFO : PUT result: OK
Feb 13 15:38:33.611507 ignition[1248]: INFO : mount: mount passed
Feb 13 15:38:33.613097 ignition[1248]: INFO : Ignition finished successfully
Feb 13 15:38:33.614938 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:38:33.626160 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:38:33.651322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:38:33.676020 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1260)
Feb 13 15:38:33.678004 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:38:33.678060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:33.679022 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:38:33.685006 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:38:33.687787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:38:33.715244 ignition[1277]: INFO : Ignition 2.20.0
Feb 13 15:38:33.715244 ignition[1277]: INFO : Stage: files
Feb 13 15:38:33.717576 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:33.717576 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:33.717576 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:33.721622 ignition[1277]: INFO : PUT result: OK
Feb 13 15:38:33.725181 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:38:33.726546 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:38:33.726546 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:38:33.743395 systemd-networkd[1085]: eth0: Gained IPv6LL
Feb 13 15:38:33.745110 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:38:33.746940 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:38:33.748713 unknown[1277]: wrote ssh authorized keys file for user: core
Feb 13 15:38:33.750603 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:38:33.752039 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:33.752039 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:38:34.358092 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:38:34.791834 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:34.791834 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:34.802849 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:34.830170 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:38:35.140398 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:38:35.566212 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:38:35.566212 ignition[1277]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:35.572517 ignition[1277]: INFO : files: files passed
Feb 13 15:38:35.572517 ignition[1277]: INFO : Ignition finished successfully
Feb 13 15:38:35.570734 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:38:35.579281 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:38:35.582995 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:38:35.607413 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:38:35.607559 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:38:35.620860 initrd-setup-root-after-ignition[1306]: grep:
Feb 13 15:38:35.622191 initrd-setup-root-after-ignition[1310]: grep:
Feb 13 15:38:35.622191 initrd-setup-root-after-ignition[1306]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:35.622191 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:35.626611 initrd-setup-root-after-ignition[1310]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:38:35.629697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:38:35.634154 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:38:35.641220 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:38:35.717301 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:38:35.717524 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:38:35.722683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:38:35.724521 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:38:35.728333 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:38:35.734326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:38:35.755369 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:38:35.773914 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:38:35.812897 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:35.815829 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:38:35.820772 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:38:35.822203 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:38:35.822349 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:38:35.831618 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:38:35.834368 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:38:35.835625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:38:35.837022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:38:35.855077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:38:35.862559 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:38:35.876876 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:38:35.879955 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:38:35.882719 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:38:35.885041 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:38:35.885914 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:38:35.886099 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:38:35.891583 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:35.896252 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:38:35.898764 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:38:35.903677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:38:35.910526 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:38:35.911016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:38:35.920056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:38:35.920404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:38:35.926083 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:38:35.926228 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:38:35.935337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:38:35.940479 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:38:35.942226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:38:35.942388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:38:35.946333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:38:35.946691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:38:35.963362 ignition[1330]: INFO : Ignition 2.20.0
Feb 13 15:38:35.963362 ignition[1330]: INFO : Stage: umount
Feb 13 15:38:35.968752 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:35.968752 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:38:35.968752 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:38:35.968752 ignition[1330]: INFO : PUT result: OK
Feb 13 15:38:36.003140 ignition[1330]: INFO : umount: umount passed
Feb 13 15:38:36.003140 ignition[1330]: INFO : Ignition finished successfully
Feb 13 15:38:35.969341 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:38:35.969527 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:38:35.980970 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:38:35.982118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:38:35.994297 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:38:35.994372 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:38:35.997252 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:38:35.997322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:38:35.999309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:38:35.999371 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:38:36.001885 systemd[1]: Stopped target network.target - Network.
Feb 13 15:38:36.003474 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:38:36.003642 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:38:36.006257 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:38:36.008184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:38:36.012879 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:38:36.021876 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:38:36.023026 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:38:36.044176 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:38:36.044234 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:38:36.046660 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:38:36.046709 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:38:36.048263 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:38:36.048341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:38:36.051855 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:38:36.051921 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:38:36.054687 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:38:36.057286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:36.059205 systemd-networkd[1085]: eth0: DHCPv6 lease lost
Feb 13 15:38:36.065044 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:38:36.065628 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:38:36.065762 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:38:36.067421 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:38:36.067513 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:38:36.069280 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:38:36.069377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:36.074848 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:38:36.074915 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:38:36.077435 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:38:36.077511 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:38:36.087137 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:38:36.087409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:38:36.087483 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:38:36.087799 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:38:36.087844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:36.087952 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:38:36.088004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:36.088131 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:38:36.088169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:36.088395 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:36.112782 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:38:36.116027 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:36.122638 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:38:36.122711 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:38:36.125226 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:38:36.125277 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:38:36.127544 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:38:36.127617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:38:36.131441 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:38:36.131518 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:38:36.135229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:38:36.135305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:36.144277 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:38:36.146663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:38:36.146774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:36.150461 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:38:36.150540 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:38:36.156383 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:38:36.156458 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:36.158939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:38:36.159019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:36.160641 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:38:36.160738 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:38:36.162341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:38:36.162447 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:38:36.167726 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:38:36.187282 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:38:36.203624 systemd[1]: Switching root.
Feb 13 15:38:36.249234 systemd-journald[179]: Journal stopped
Feb 13 15:38:39.168216 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:38:39.168532 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:38:39.168560 kernel: SELinux: policy capability open_perms=1
Feb 13 15:38:39.168590 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:38:39.168611 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:38:39.168733 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:38:39.168756 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:38:39.168784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:38:39.168805 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:38:39.168829 kernel: audit: type=1403 audit(1739461117.371:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:38:39.168856 systemd[1]: Successfully loaded SELinux policy in 52.690ms.
Feb 13 15:38:39.168897 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.511ms.
Feb 13 15:38:39.168924 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:38:39.168950 systemd[1]: Detected virtualization amazon.
Feb 13 15:38:39.169055 systemd[1]: Detected architecture x86-64.
Feb 13 15:38:39.169075 systemd[1]: Detected first boot.
Feb 13 15:38:39.169099 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:38:39.169120 zram_generator::config[1372]: No configuration found.
Feb 13 15:38:39.169141 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:38:39.169159 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:38:39.169180 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:38:39.169208 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:38:39.169386 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:38:39.169409 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:38:39.169433 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:38:39.169453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:38:39.169477 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:38:39.169498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:38:39.169518 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:38:39.169543 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:38:39.169565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:38:39.169588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:38:39.169611 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:38:39.169638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:38:39.169660 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:38:39.169681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:38:39.169702 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:38:39.169732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:38:39.169753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:38:39.169777 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:38:39.169800 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:38:39.169825 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:38:39.169846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:38:39.169867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:38:39.169888 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:38:39.169909 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:38:39.169929 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:38:39.169951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:38:39.170484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:38:39.170523 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:38:39.170550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:38:39.170572 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:38:39.170594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:38:39.170616 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:38:39.170637 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:38:39.170739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:39.170762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:38:39.170783 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:38:39.170809 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:38:39.170832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:38:39.170854 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:38:39.170875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:38:39.170896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:39.170918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:38:39.170940 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:38:39.170961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:38:39.170999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:38:39.171032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:38:39.171053 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:38:39.171075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:38:39.171096 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:38:39.171118 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:38:39.171139 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:38:39.171160 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:38:39.171181 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:38:39.171206 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:38:39.171227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:38:39.171248 kernel: loop: module loaded
Feb 13 15:38:39.171269 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:38:39.171290 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:38:39.171311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:38:39.171333 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:38:39.171355 systemd[1]: Stopped verity-setup.service.
Feb 13 15:38:39.171377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:39.171458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:38:39.171486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:38:39.171509 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:38:39.171531 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:38:39.171555 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:38:39.171580 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:38:39.171602 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:39.171663 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:38:39.171688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:38:39.171710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:39.171731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:39.173095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:39.173150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:39.173174 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:38:39.173202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:38:39.173227 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:38:39.173249 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:38:39.173313 systemd-journald[1451]: Collecting audit messages is disabled.
Feb 13 15:38:39.173355 systemd-journald[1451]: Journal started
Feb 13 15:38:39.173406 systemd-journald[1451]: Runtime Journal (/run/log/journal/ec2e0c190a81e019431bf822a02197eb) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:38:38.523842 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:38:38.582550 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:38:38.583396 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:38:39.195853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:38:39.195950 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:38:39.195997 kernel: fuse: init (API version 7.39)
Feb 13 15:38:39.196727 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:39.199024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:38:39.216629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:38:39.234309 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:38:39.234577 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:38:39.251904 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:38:39.303108 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:38:39.318082 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:38:39.321125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:38:39.321186 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:38:39.329080 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:38:39.336393 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:38:39.346372 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:38:39.348052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:39.358408 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:38:39.366514 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:38:39.371669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:38:39.377476 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:38:39.384307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:38:39.394501 systemd-tmpfiles[1464]: ACLs are not supported, ignoring.
Feb 13 15:38:39.394531 systemd-tmpfiles[1464]: ACLs are not supported, ignoring.
Feb 13 15:38:39.403117 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:38:39.406703 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:38:39.409492 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:38:39.438524 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:38:39.475754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:38:39.487906 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 15:38:39.478406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:38:39.494195 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:38:39.499941 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:38:39.517666 systemd-journald[1451]: Time spent on flushing to /var/log/journal/ec2e0c190a81e019431bf822a02197eb is 72.123ms for 965 entries.
Feb 13 15:38:39.517666 systemd-journald[1451]: System Journal (/var/log/journal/ec2e0c190a81e019431bf822a02197eb) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:38:39.601946 systemd-journald[1451]: Received client request to flush runtime journal.
Feb 13 15:38:39.602015 kernel: ACPI: bus type drm_connector registered
Feb 13 15:38:39.522230 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:38:39.523066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:38:39.545516 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:38:39.548678 udevadm[1508]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:38:39.550812 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:38:39.560238 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:38:39.585164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:39.605088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:38:39.625153 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:38:39.657072 kernel: loop1: detected capacity change from 0 to 211296
Feb 13 15:38:39.675256 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:38:39.686234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:38:39.695212 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:38:39.696347 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:38:39.726491 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Feb 13 15:38:39.726929 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Feb 13 15:38:39.733027 kernel: loop2: detected capacity change from 0 to 62848
Feb 13 15:38:39.739474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:39.827859 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 15:38:39.985009 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 15:38:40.036007 kernel: loop5: detected capacity change from 0 to 211296
Feb 13 15:38:40.068100 kernel: loop6: detected capacity change from 0 to 62848
Feb 13 15:38:40.119579 kernel: loop7: detected capacity change from 0 to 138184
Feb 13 15:38:40.196377 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:38:40.199388 (sd-merge)[1528]: Merged extensions into '/usr'.
Feb 13 15:38:40.210797 systemd[1]: Reloading requested from client PID 1499 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:38:40.211323 systemd[1]: Reloading...
Feb 13 15:38:40.424598 zram_generator::config[1554]: No configuration found.
Feb 13 15:38:40.727483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:40.849372 systemd[1]: Reloading finished in 636 ms.
Feb 13 15:38:40.879065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:38:40.894256 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:38:40.898205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:38:40.920164 systemd[1]: Reloading requested from client PID 1602 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:38:40.920190 systemd[1]: Reloading...
Feb 13 15:38:40.979590 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:38:40.980154 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:38:40.983695 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:38:40.985215 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Feb 13 15:38:40.985315 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Feb 13 15:38:41.004621 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:38:41.004644 systemd-tmpfiles[1603]: Skipping /boot
Feb 13 15:38:41.041524 zram_generator::config[1627]: No configuration found.
Feb 13 15:38:41.073091 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:38:41.073108 systemd-tmpfiles[1603]: Skipping /boot
Feb 13 15:38:41.362064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:41.432156 ldconfig[1491]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:38:41.468499 systemd[1]: Reloading finished in 547 ms.
Feb 13 15:38:41.491073 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:38:41.492919 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:38:41.499671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:41.524153 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:38:41.541595 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:38:41.547072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:38:41.552764 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:41.557200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:41.569564 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:38:41.583437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.583738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:41.590485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:38:41.596337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:38:41.607945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:38:41.609845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:41.610612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.630224 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:38:41.633586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.634030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:41.634295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:41.634455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.647850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.648300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:38:41.656338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:38:41.657909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:38:41.658411 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:38:41.661206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:38:41.664125 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:38:41.673461 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:38:41.673899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:38:41.687479 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:38:41.699994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:38:41.700195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:38:41.702040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:38:41.725498 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:38:41.727153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:38:41.729658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:38:41.730615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:38:41.735171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:38:41.742663 systemd-udevd[1686]: Using default interface naming scheme 'v255'.
Feb 13 15:38:41.756303 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:38:41.764633 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:38:41.788115 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:38:41.806526 augenrules[1725]: No rules
Feb 13 15:38:41.808915 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:38:41.813265 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:38:41.817297 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:38:41.829657 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:38:41.834249 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:38:41.840662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:41.864963 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:38:42.015132 (udev-worker)[1741]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:38:42.076595 systemd-resolved[1685]: Positive Trust Anchors:
Feb 13 15:38:42.076621 systemd-resolved[1685]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:42.076670 systemd-resolved[1685]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:38:42.089260 systemd-networkd[1737]: lo: Link UP
Feb 13 15:38:42.089272 systemd-networkd[1737]: lo: Gained carrier
Feb 13 15:38:42.092615 systemd-networkd[1737]: Enumeration completed
Feb 13 15:38:42.092774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:38:42.093299 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:42.093378 systemd-networkd[1737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:38:42.100247 systemd-resolved[1685]: Defaulting to hostname 'linux'.
Feb 13 15:38:42.102219 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:38:42.107004 systemd-networkd[1737]: eth0: Link UP
Feb 13 15:38:42.107203 systemd-networkd[1737]: eth0: Gained carrier
Feb 13 15:38:42.107240 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:42.112168 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:42.113817 systemd[1]: Reached target network.target - Network.
Feb 13 15:38:42.115136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:42.125076 systemd-networkd[1737]: eth0: DHCPv4 address 172.31.17.42/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:38:42.130793 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:38:42.190006 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1744)
Feb 13 15:38:42.218017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:38:42.230633 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:38:42.230709 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Feb 13 15:38:42.236046 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:38:42.244002 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 15:38:42.247717 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:42.346021 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 15:38:42.464004 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:38:42.477956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:42.489297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:38:42.501433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:38:42.507481 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:38:42.515791 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:38:42.531175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:38:42.564383 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:38:42.627547 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:38:42.628187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:42.642484 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:38:42.671589 lvm[1854]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:38:42.710710 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:38:42.899755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:42.901905 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:38:42.903615 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:38:42.905614 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:38:42.907384 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:38:42.908697 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:38:42.914404 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:38:42.919954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:38:42.920082 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:38:42.925321 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:38:42.937730 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:38:42.952249 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:38:42.975312 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:38:42.978647 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:38:42.980307 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:38:42.981655 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:38:42.983035 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:38:42.983062 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:38:42.989163 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:38:42.995234 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:38:43.012939 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:38:43.036957 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:38:43.051406 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:38:43.054324 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:38:43.066387 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:38:43.080217 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:38:43.104142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:38:43.121163 jq[1864]: false
Feb 13 15:38:43.121168 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:38:43.124937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:38:43.147101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:38:43.171882 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:38:43.174913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:38:43.175670 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:38:43.182833 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:38:43.193248 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:38:43.199466 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:38:43.201078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:38:43.207464 jq[1877]: true
Feb 13 15:38:43.236773 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:38:43.239587 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:38:43.326611 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:38:43.326850 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:38:43.333438 dbus-daemon[1863]: [system] SELinux support is enabled
Feb 13 15:38:43.338600 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:38:43.346857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:38:43.346903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:38:43.348576 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:38:43.348614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:38:43.358018 jq[1887]: true
Feb 13 15:38:43.358905 extend-filesystems[1865]: Found loop4
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found loop5
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found loop6
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found loop7
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found nvme0n1
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found nvme0n1p1
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found nvme0n1p2
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found nvme0n1p3
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found usr
Feb 13 15:38:43.360194 extend-filesystems[1865]: Found nvme0n1p4
Feb 13 15:38:43.402105 extend-filesystems[1865]: Found nvme0n1p6
Feb 13 15:38:43.402105 extend-filesystems[1865]: Found nvme0n1p7
Feb 13 15:38:43.402105 extend-filesystems[1865]: Found nvme0n1p9
Feb 13 15:38:43.402105 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: ----------------------------------------------------
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: corporation. Support and training for ntp-4 are
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: available at https://www.nwtime.org/support
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: ----------------------------------------------------
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: proto: precision = 0.059 usec (-24)
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: basedate set to 2025-02-01
Feb 13 15:38:43.434204 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:38:43.376277 (ntainerd)[1897]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:38:43.443327 tar[1880]: linux-amd64/helm
Feb 13 15:38:43.378523 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:38:43.463551 update_engine[1876]: I20250213 15:38:43.408648  1876 main.cc:92] Flatcar Update Engine starting
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen normally on 3 eth0 172.31.17.42:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen normally on 4 lo [::1]:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listen normally on 5 eth0 [fe80::4ac:29ff:fe05:c41d%2]:123
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: Listening on routing socket on fd #22 for interface updates
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:38:43.464384 ntpd[1867]: 13 Feb 15:38:43 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:38:43.386678 systemd-logind[1875]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:38:43.378551 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:38:43.485355 coreos-metadata[1862]: Feb 13 15:38:43.458 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:38:43.485355 coreos-metadata[1862]: Feb 13 15:38:43.473 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:38:43.486145 update_engine[1876]: I20250213 15:38:43.474691  1876 update_check_scheduler.cc:74] Next update check in 5m19s
Feb 13 15:38:43.386706 systemd-logind[1875]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:38:43.378561 ntpd[1867]: ----------------------------------------------------
Feb 13 15:38:43.386728 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:38:43.378572 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:38:43.389299 systemd-logind[1875]: New seat seat0.
Feb 13 15:38:43.378582 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:38:43.394577 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:38:43.378592 ntpd[1867]: corporation. Support and training for ntp-4 are
Feb 13 15:38:43.407657 systemd-networkd[1737]: eth0: Gained IPv6LL
Feb 13 15:38:43.378601 ntpd[1867]: available at https://www.nwtime.org/support
Feb 13 15:38:43.499231 coreos-metadata[1862]: Feb 13 15:38:43.487 INFO Fetch successful
Feb 13 15:38:43.499231 coreos-metadata[1862]: Feb 13 15:38:43.487 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:38:43.499231 coreos-metadata[1862]: Feb 13 15:38:43.498 INFO Fetch successful
Feb 13 15:38:43.499231 coreos-metadata[1862]: Feb 13 15:38:43.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:38:43.424210 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:38:43.378614 ntpd[1867]: ----------------------------------------------------
Feb 13 15:38:43.433345 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:38:43.389299 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1737 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:38:43.439993 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:38:43.409632 ntpd[1867]: proto: precision = 0.059 usec (-24)
Feb 13 15:38:43.454319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:43.428765 ntpd[1867]: basedate set to 2025-02-01
Feb 13 15:38:43.458540 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:38:43.428789 ntpd[1867]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:38:43.474374 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:38:43.436827 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:38:43.497678 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:38:43.436889 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:38:43.499903 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:38:43.437100 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:38:43.437138 ntpd[1867]: Listen normally on 3 eth0 172.31.17.42:123
Feb 13 15:38:43.437180 ntpd[1867]: Listen normally on 4 lo [::1]:123
Feb 13 15:38:43.437220 ntpd[1867]: Listen normally on 5 eth0 [fe80::4ac:29ff:fe05:c41d%2]:123
Feb 13 15:38:43.437257 ntpd[1867]: Listening on routing socket on fd #22 for interface updates
Feb 13 15:38:43.438750 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:38:43.438782 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:38:43.513561 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9
Feb 13 15:38:43.514864 coreos-metadata[1862]: Feb 13 15:38:43.508 INFO Fetch successful
Feb 13 15:38:43.514864 coreos-metadata[1862]: Feb 13 15:38:43.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:38:43.515499 coreos-metadata[1862]: Feb 13 15:38:43.515 INFO Fetch successful
Feb 13 15:38:43.515814 coreos-metadata[1862]: Feb 13 15:38:43.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:38:43.527426 coreos-metadata[1862]: Feb 13 15:38:43.527 INFO Fetch failed with 404: resource not found
Feb 13 15:38:43.527426 coreos-metadata[1862]: Feb 13 15:38:43.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:38:43.530544 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:38:43.535997 extend-filesystems[1930]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:38:43.542258 coreos-metadata[1862]: Feb 13 15:38:43.542 INFO Fetch successful
Feb 13 15:38:43.542258 coreos-metadata[1862]: Feb 13 15:38:43.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:38:43.543163 coreos-metadata[1862]: Feb 13 15:38:43.542 INFO Fetch successful
Feb 13 15:38:43.543321 coreos-metadata[1862]: Feb 13 15:38:43.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:38:43.544993 coreos-metadata[1862]: Feb 13 15:38:43.544 INFO Fetch successful
Feb 13 15:38:43.545248 coreos-metadata[1862]: Feb 13 15:38:43.545 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:38:43.546914 coreos-metadata[1862]: Feb 13 15:38:43.545 INFO Fetch successful
Feb 13 15:38:43.547129 coreos-metadata[1862]: Feb 13 15:38:43.546 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:38:43.548806 coreos-metadata[1862]: Feb 13 15:38:43.548 INFO Fetch successful
Feb 13 15:38:43.548998 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:38:43.720668 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:38:43.723596 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:38:43.750736 amazon-ssm-agent[1934]: Initializing new seelog logger
Feb 13 15:38:43.750736 amazon-ssm-agent[1934]: New Seelog Logger Creation Complete
Feb 13 15:38:43.750736 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.750736 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.750736 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 processing appconfig overrides
Feb 13 15:38:43.753690 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.754473 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.754675 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 processing appconfig overrides
Feb 13 15:38:43.755398 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.757055 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.757386 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 processing appconfig overrides
Feb 13 15:38:43.776421 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO Proxy environment variables:
Feb 13 15:38:43.779266 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:38:43.814670 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.814670 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:38:43.814670 amazon-ssm-agent[1934]: 2025/02/13 15:38:43 processing appconfig overrides
Feb 13 15:38:43.820167 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1746)
Feb 13 15:38:43.821617 extend-filesystems[1930]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:38:43.821617 extend-filesystems[1930]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:38:43.821617 extend-filesystems[1930]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:38:43.830930 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:38:43.824291 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:38:43.824909 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:38:43.840544 bash[1940]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:38:43.852461 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:38:43.866394 systemd[1]: Starting sshkeys.service...
Feb 13 15:38:43.895440 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO https_proxy:
Feb 13 15:38:43.900826 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:38:43.915474 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:38:43.969400 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:38:44.029965 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO http_proxy:
Feb 13 15:38:44.141921 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO no_proxy:
Feb 13 15:38:44.248297 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:38:44.249575 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:38:44.249850 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:38:44.260555 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1910 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:38:44.275628 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:38:44.355375 amazon-ssm-agent[1934]: 2025-02-13 15:38:43 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:38:44.387550 polkitd[2008]: Started polkitd version 121
Feb 13 15:38:44.429117 coreos-metadata[1979]: Feb 13 15:38:44.429 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:38:44.430658 coreos-metadata[1979]: Feb 13 15:38:44.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:38:44.439838 coreos-metadata[1979]: Feb 13 15:38:44.439 INFO Fetch successful
Feb 13 15:38:44.439838 coreos-metadata[1979]: Feb 13 15:38:44.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:38:44.443052 coreos-metadata[1979]: Feb 13 15:38:44.443 INFO Fetch successful
Feb 13 15:38:44.453397 unknown[1979]: wrote ssh authorized keys file for user: core
Feb 13 15:38:44.453661 polkitd[2008]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:38:44.453768 polkitd[2008]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:38:44.459699 polkitd[2008]: Finished loading, compiling and executing 2 rules
Feb 13 15:38:44.463239 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO Agent will take identity from EC2
Feb 13 15:38:44.463686 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:38:44.463950 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:38:44.482358 polkitd[2008]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:38:44.502999 sshd_keygen[1907]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:38:44.567544 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:38:44.579011 update-ssh-keys[2064]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:38:44.581253 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:38:44.601074 systemd[1]: Finished sshkeys.service.
Feb 13 15:38:44.664082 systemd-hostnamed[1910]: Hostname set to (transient)
Feb 13 15:38:44.664205 systemd-resolved[1685]: System hostname changed to 'ip-172-31-17-42'.
Feb 13 15:38:44.671817 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:38:44.686606 locksmithd[1925]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:38:44.767886 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:38:44.792875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:38:44.806585 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:38:44.821376 containerd[1897]: time="2025-02-13T15:38:44.818633894Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:38:44.816030 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:38:44.816278 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:38:44.836250 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:38:44.866519 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [Registrar] Starting registrar module
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:38:44.918114 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:38:44.935851 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:38:44.950020 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:38:44.963411 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:38:44.965933 amazon-ssm-agent[1934]: 2025-02-13 15:38:44 INFO [CredentialRefresher] Next credential rotation will be in 30.858319230666666 minutes
Feb 13 15:38:44.966454 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:38:44.986034 containerd[1897]: time="2025-02-13T15:38:44.985862580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:44.989545 containerd[1897]: time="2025-02-13T15:38:44.989473852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:38:44.989734 containerd[1897]: time="2025-02-13T15:38:44.989713605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:38:44.989844 containerd[1897]: time="2025-02-13T15:38:44.989828800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:38:44.990564 containerd[1897]: time="2025-02-13T15:38:44.990541618Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:38:44.990684 containerd[1897]: time="2025-02-13T15:38:44.990668521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:44.991028 containerd[1897]: time="2025-02-13T15:38:44.991002412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:38:44.991203 containerd[1897]: time="2025-02-13T15:38:44.991130127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:44.997416 containerd[1897]: time="2025-02-13T15:38:44.997322727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:38:44.997765 containerd[1897]: time="2025-02-13T15:38:44.997499765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:44.998032 containerd[1897]: time="2025-02-13T15:38:44.997829582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:38:44.998032 containerd[1897]: time="2025-02-13T15:38:44.997858311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:45.001547 containerd[1897]: time="2025-02-13T15:38:45.000693749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:45.001547 containerd[1897]: time="2025-02-13T15:38:45.001365319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:38:45.004027 containerd[1897]: time="2025-02-13T15:38:45.003287420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:38:45.004027 containerd[1897]: time="2025-02-13T15:38:45.003317229Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:38:45.004027 containerd[1897]: time="2025-02-13T15:38:45.003590368Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:38:45.004027 containerd[1897]: time="2025-02-13T15:38:45.003671987Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026145920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026247656Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026273097Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026296735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026318295Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026528779Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026831284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.026954968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027048781Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027073919Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027112887Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027135114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027153626Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.027664 containerd[1897]: time="2025-02-13T15:38:45.027174270Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027211546Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027230470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027272840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027294423Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027321815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027356767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027377447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027394776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027430104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027451832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027470020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027489693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027524480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028274 containerd[1897]: time="2025-02-13T15:38:45.027548567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028787 containerd[1897]: time="2025-02-13T15:38:45.027567613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028787 containerd[1897]: time="2025-02-13T15:38:45.027585217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028787 containerd[1897]: time="2025-02-13T15:38:45.027612507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.028787 containerd[1897]: time="2025-02-13T15:38:45.027633637Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029001350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029037370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029055501Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029126858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029245527Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029267814Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029289079Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029303634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029325063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029341165Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:38:45.030754 containerd[1897]: time="2025-02-13T15:38:45.029356942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:38:45.031265 containerd[1897]: time="2025-02-13T15:38:45.029796652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:38:45.031265 containerd[1897]: time="2025-02-13T15:38:45.029868623Z" level=info msg="Connect containerd service"
Feb 13 15:38:45.031265 containerd[1897]: time="2025-02-13T15:38:45.029912364Z" level=info msg="using legacy CRI server"
Feb 13 15:38:45.031265 containerd[1897]: time="2025-02-13T15:38:45.029922122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:38:45.031265 containerd[1897]: time="2025-02-13T15:38:45.030107640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032023289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032159651Z" level=info msg="Start subscribing containerd event"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032207988Z" level=info msg="Start recovering state"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032284100Z" level=info msg="Start event monitor"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032310107Z" level=info msg="Start snapshots syncer"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032322027Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:38:45.032642 containerd[1897]: time="2025-02-13T15:38:45.032331768Z" level=info msg="Start streaming server"
Feb 13 15:38:45.033333 containerd[1897]: time="2025-02-13T15:38:45.033311411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:38:45.038696 containerd[1897]: time="2025-02-13T15:38:45.035751366Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:38:45.038696 containerd[1897]: time="2025-02-13T15:38:45.036781305Z" level=info msg="containerd successfully booted in 0.219470s"
Feb 13 15:38:45.050634 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:38:45.054997 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:38:45.064598 systemd[1]: Started sshd@0-172.31.17.42:22-139.178.89.65:50710.service - OpenSSH per-connection server daemon (139.178.89.65:50710).
Feb 13 15:38:45.294387 sshd[2102]: Accepted publickey for core from 139.178.89.65 port 50710 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:38:45.299803 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:45.324661 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:38:45.338482 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:38:45.348070 systemd-logind[1875]: New session 1 of user core.
Feb 13 15:38:45.375968 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:38:45.391502 systemd[1]: Starting user@500.service - User Manager for UID 500...
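[Editor's note] The containerd error above ("no network config found in /etc/cni/net.d") is expected on first boot: the log's CRI config sets NetworkPluginConfDir:/etc/cni/net.d, and that directory is still empty until a CNI plugin installs a conflist. A minimal sketch of the kind of file that satisfies the loader; the directory here is a demo path, and "demo-net", "cni0", and the subnet are illustrative placeholders, not values from this log:

```shell
# Write an illustrative CNI conflist into a demo directory (the real
# directory on this host would be /etc/cni/net.d, per the CRI config above).
mkdir -p /tmp/cni-demo/net.d
cat > /tmp/cni-demo/net.d/10-demo.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
ls /tmp/cni-demo/net.d
```

Once a conflist like this lands in the real directory, containerd's "cni network conf syncer" (started a few lines below) picks it up without a restart.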
Feb 13 15:38:45.411066 (systemd)[2106]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:38:45.609900 tar[1880]: linux-amd64/LICENSE
Feb 13 15:38:45.610490 tar[1880]: linux-amd64/README.md
Feb 13 15:38:45.634941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:38:45.690633 systemd[2106]: Queued start job for default target default.target.
Feb 13 15:38:45.695879 systemd[2106]: Created slice app.slice - User Application Slice.
Feb 13 15:38:45.695924 systemd[2106]: Reached target paths.target - Paths.
Feb 13 15:38:45.695945 systemd[2106]: Reached target timers.target - Timers.
Feb 13 15:38:45.701831 systemd[2106]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:38:45.720318 systemd[2106]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:38:45.721459 systemd[2106]: Reached target sockets.target - Sockets.
Feb 13 15:38:45.721485 systemd[2106]: Reached target basic.target - Basic System.
Feb 13 15:38:45.721548 systemd[2106]: Reached target default.target - Main User Target.
Feb 13 15:38:45.721587 systemd[2106]: Startup finished in 297ms.
Feb 13 15:38:45.722223 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:38:45.738515 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:38:45.959328 systemd[1]: Started sshd@1-172.31.17.42:22-139.178.89.65:50724.service - OpenSSH per-connection server daemon (139.178.89.65:50724).
Feb 13 15:38:45.987882 amazon-ssm-agent[1934]: 2025-02-13 15:38:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:38:46.089105 amazon-ssm-agent[1934]: 2025-02-13 15:38:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2122) started
Feb 13 15:38:46.141661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:46.143708 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:38:46.145403 systemd[1]: Startup finished in 699ms (kernel) + 9.661s (initrd) + 8.823s (userspace) = 19.184s.
Feb 13 15:38:46.190848 amazon-ssm-agent[1934]: 2025-02-13 15:38:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:38:46.226220 sshd[2121]: Accepted publickey for core from 139.178.89.65 port 50724 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:38:46.229744 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:46.240171 systemd-logind[1875]: New session 2 of user core.
Feb 13 15:38:46.245742 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:38:46.273494 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:38:46.369813 sshd[2141]: Connection closed by 139.178.89.65 port 50724
Feb 13 15:38:46.374530 sshd-session[2121]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:46.395566 systemd[1]: sshd@1-172.31.17.42:22-139.178.89.65:50724.service: Deactivated successfully.
Feb 13 15:38:46.407721 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:38:46.409785 systemd-logind[1875]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:38:46.420940 systemd[1]: Started sshd@2-172.31.17.42:22-139.178.89.65:50728.service - OpenSSH per-connection server daemon (139.178.89.65:50728).
Feb 13 15:38:46.421765 systemd-logind[1875]: Removed session 2.
Feb 13 15:38:46.600233 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 50728 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:38:46.601795 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:46.609721 systemd-logind[1875]: New session 3 of user core.
Feb 13 15:38:46.616185 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:38:46.734004 sshd[2156]: Connection closed by 139.178.89.65 port 50728
Feb 13 15:38:46.734138 sshd-session[2150]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:46.740667 systemd-logind[1875]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:38:46.741507 systemd[1]: sshd@2-172.31.17.42:22-139.178.89.65:50728.service: Deactivated successfully.
Feb 13 15:38:46.747577 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:38:46.750278 systemd-logind[1875]: Removed session 3.
Feb 13 15:38:46.774422 systemd[1]: Started sshd@3-172.31.17.42:22-139.178.89.65:50736.service - OpenSSH per-connection server daemon (139.178.89.65:50736).
Feb 13 15:38:46.953873 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 50736 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:38:46.954928 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:46.966647 systemd-logind[1875]: New session 4 of user core.
Feb 13 15:38:46.976233 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:38:47.094500 kubelet[2136]: E0213 15:38:47.094418    2136 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:38:47.097925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:38:47.098133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:38:47.098780 systemd[1]: kubelet.service: Consumed 1.066s CPU time.
Feb 13 15:38:47.104388 sshd[2164]: Connection closed by 139.178.89.65 port 50736
Feb 13 15:38:47.106097 sshd-session[2161]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:47.109416 systemd[1]: sshd@3-172.31.17.42:22-139.178.89.65:50736.service: Deactivated successfully.
Feb 13 15:38:47.111672 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:38:47.113378 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:38:47.114617 systemd-logind[1875]: Removed session 4.
Feb 13 15:38:47.142049 systemd[1]: Started sshd@4-172.31.17.42:22-139.178.89.65:50750.service - OpenSSH per-connection server daemon (139.178.89.65:50750).
Feb 13 15:38:47.321482 sshd[2171]: Accepted publickey for core from 139.178.89.65 port 50750 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:38:47.324807 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:47.348064 systemd-logind[1875]: New session 5 of user core.
Feb 13 15:38:47.355476 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:38:47.499346 sudo[2174]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:38:47.499772 sudo[2174]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:38:48.297769 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:38:48.298410 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:38:49.210229 dockerd[2192]: time="2025-02-13T15:38:49.210162037Z" level=info msg="Starting up"
Feb 13 15:38:49.605904 dockerd[2192]: time="2025-02-13T15:38:49.605651557Z" level=info msg="Loading containers: start."
Feb 13 15:38:49.883049 kernel: Initializing XFRM netlink socket
Feb 13 15:38:49.933911 (udev-worker)[2300]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:38:50.021303 systemd-networkd[1737]: docker0: Link UP
Feb 13 15:38:50.066449 dockerd[2192]: time="2025-02-13T15:38:50.065651928Z" level=info msg="Loading containers: done."
Feb 13 15:38:50.101798 dockerd[2192]: time="2025-02-13T15:38:50.101738741Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:38:50.102027 dockerd[2192]: time="2025-02-13T15:38:50.101860604Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:38:50.102090 dockerd[2192]: time="2025-02-13T15:38:50.102051005Z" level=info msg="Daemon has completed initialization"
Feb 13 15:38:50.148813 dockerd[2192]: time="2025-02-13T15:38:50.148691714Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:38:50.149074 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:38:51.195340 systemd-resolved[1685]: Clock change detected. Flushing caches.
Feb 13 15:38:52.453551 containerd[1897]: time="2025-02-13T15:38:52.453512881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:38:53.134213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597835379.mount: Deactivated successfully.
Feb 13 15:38:56.255774 containerd[1897]: time="2025-02-13T15:38:56.255716077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:56.257386 containerd[1897]: time="2025-02-13T15:38:56.257334035Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283"
Feb 13 15:38:56.259449 containerd[1897]: time="2025-02-13T15:38:56.259407676Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:56.264070 containerd[1897]: time="2025-02-13T15:38:56.263724836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:56.265304 containerd[1897]: time="2025-02-13T15:38:56.265264692Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 3.811714134s"
Feb 13 15:38:56.265564 containerd[1897]: time="2025-02-13T15:38:56.265312808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\""
Feb 13 15:38:56.296193 containerd[1897]: time="2025-02-13T15:38:56.296140524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:38:58.164745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:38:58.171695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:58.891599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:58.894854 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:38:59.066138 kubelet[2458]: E0213 15:38:59.065545    2458 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:38:59.073951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:38:59.074132 systemd[1]: kubelet.service: Failed with result 'exit-code'.
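[Editor's note] The kubelet failures in this log all come from the same missing file, /var/lib/kubelet/config.yaml; on kubeadm-managed nodes that file is normally written during `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears. A hedged sketch of the same pre-flight check, run against a temporary root (a hypothetical demo path) so it is runnable anywhere; the KubeletConfiguration fields below are illustrative, not taken from this host:

```shell
# Mirror the failing check from run.go:74 against a demo root instead of /.
cfg=/tmp/kubelet-demo/var/lib/kubelet/config.yaml

if [ ! -f "$cfg" ]; then
  # Same condition that makes kubelet exit with status 1 in the log above.
  echo "missing kubelet config: $cfg"
fi

# Once provisioning writes a minimal KubeletConfiguration, the check passes:
mkdir -p "$(dirname "$cfg")"
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

[ -f "$cfg" ] && echo "kubelet config present: $cfg"
```

This also explains the "Scheduled restart job, restart counter is at 1" line: the unit's restart policy retries until the file exists.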
Feb 13 15:38:59.863729 containerd[1897]: time="2025-02-13T15:38:59.863675132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:59.884414 containerd[1897]: time="2025-02-13T15:38:59.884329518Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164"
Feb 13 15:38:59.912437 containerd[1897]: time="2025-02-13T15:38:59.912354604Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:59.923173 containerd[1897]: time="2025-02-13T15:38:59.921938965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:59.923697 containerd[1897]: time="2025-02-13T15:38:59.923660100Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 3.627099147s"
Feb 13 15:38:59.923995 containerd[1897]: time="2025-02-13T15:38:59.923968460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\""
Feb 13 15:38:59.953795 containerd[1897]: time="2025-02-13T15:38:59.953748425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:39:03.841995 containerd[1897]: time="2025-02-13T15:39:03.841922449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:03.843535 containerd[1897]: time="2025-02-13T15:39:03.843470145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056"
Feb 13 15:39:03.846299 containerd[1897]: time="2025-02-13T15:39:03.846249889Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:03.868694 containerd[1897]: time="2025-02-13T15:39:03.867911351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:03.872328 containerd[1897]: time="2025-02-13T15:39:03.872265474Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 3.918473187s"
Feb 13 15:39:03.872628 containerd[1897]: time="2025-02-13T15:39:03.872600725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\""
Feb 13 15:39:04.054314 containerd[1897]: time="2025-02-13T15:39:04.054276321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:39:05.471809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271493308.mount: Deactivated successfully.
Feb 13 15:39:06.199729 containerd[1897]: time="2025-02-13T15:39:06.199677707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:06.202453 containerd[1897]: time="2025-02-13T15:39:06.202402912Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:39:06.204734 containerd[1897]: time="2025-02-13T15:39:06.204635282Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:06.209180 containerd[1897]: time="2025-02-13T15:39:06.209127314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:06.210879 containerd[1897]: time="2025-02-13T15:39:06.210736062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.156419046s"
Feb 13 15:39:06.211051 containerd[1897]: time="2025-02-13T15:39:06.210879743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:39:06.252798 containerd[1897]: time="2025-02-13T15:39:06.252758608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:39:06.993083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337217302.mount: Deactivated successfully.
Feb 13 15:39:08.451073 containerd[1897]: time="2025-02-13T15:39:08.451019575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:08.452876 containerd[1897]: time="2025-02-13T15:39:08.452729639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 15:39:08.454979 containerd[1897]: time="2025-02-13T15:39:08.454609335Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:08.458205 containerd[1897]: time="2025-02-13T15:39:08.458145113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:08.459422 containerd[1897]: time="2025-02-13T15:39:08.459376668Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.206575055s"
Feb 13 15:39:08.459422 containerd[1897]: time="2025-02-13T15:39:08.459422829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 15:39:08.487321 containerd[1897]: time="2025-02-13T15:39:08.487275065Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:39:09.001579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195042601.mount: Deactivated successfully.
Feb 13 15:39:09.011665 containerd[1897]: time="2025-02-13T15:39:09.011604218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:09.013141 containerd[1897]: time="2025-02-13T15:39:09.012935032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 15:39:09.015184 containerd[1897]: time="2025-02-13T15:39:09.014878026Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:09.018033 containerd[1897]: time="2025-02-13T15:39:09.017973506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:09.019196 containerd[1897]: time="2025-02-13T15:39:09.018764877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 531.44647ms"
Feb 13 15:39:09.019196 containerd[1897]: time="2025-02-13T15:39:09.018801532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 15:39:09.047019 containerd[1897]: time="2025-02-13T15:39:09.046978824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:39:09.324737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:39:09.334726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:39:10.031414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:10.035944 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:39:10.081249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764543114.mount: Deactivated successfully.
Feb 13 15:39:10.126606 kubelet[2561]: E0213 15:39:10.126492 2561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:39:10.130101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:39:10.130385 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:39:13.360271 containerd[1897]: time="2025-02-13T15:39:13.360213018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:13.362074 containerd[1897]: time="2025-02-13T15:39:13.361982581Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Feb 13 15:39:13.364235 containerd[1897]: time="2025-02-13T15:39:13.363678493Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:13.367528 containerd[1897]: time="2025-02-13T15:39:13.367489446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:13.368867 containerd[1897]: time="2025-02-13T15:39:13.368829262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.321802494s"
Feb 13 15:39:13.368972 containerd[1897]: time="2025-02-13T15:39:13.368876827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Feb 13 15:39:15.516333 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:39:17.617950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:17.626754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:39:17.667382 systemd[1]: Reloading requested from client PID 2681 ('systemctl') (unit session-5.scope)...
Feb 13 15:39:17.667402 systemd[1]: Reloading...
Feb 13 15:39:17.829209 zram_generator::config[2724]: No configuration found.
Feb 13 15:39:17.982514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:39:18.132526 systemd[1]: Reloading finished in 464 ms.
Feb 13 15:39:18.200780 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:39:18.200909 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:39:18.201595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:18.210072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:39:18.831566 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:39:18.834370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:18.904415 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:39:18.904415 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:39:18.904415 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:39:18.908195 kubelet[2778]: I0213 15:39:18.907245 2778 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:39:19.818220 kubelet[2778]: I0213 15:39:19.818178 2778 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:39:19.818220 kubelet[2778]: I0213 15:39:19.818219 2778 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:39:19.818514 kubelet[2778]: I0213 15:39:19.818493 2778 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:39:19.862568 kubelet[2778]: E0213 15:39:19.862509 2778 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.862716 kubelet[2778]: I0213 15:39:19.862650 2778 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:39:19.890475 kubelet[2778]: I0213 15:39:19.890438 2778 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:39:19.892808 kubelet[2778]: I0213 15:39:19.892770 2778 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:39:19.894503 kubelet[2778]: I0213 15:39:19.894462 2778 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:39:19.894503 kubelet[2778]: I0213 15:39:19.894505 2778 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:39:19.894760 kubelet[2778]: I0213 15:39:19.894520 2778 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:39:19.894760 kubelet[2778]: I0213 15:39:19.894681 2778 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:39:19.894835 kubelet[2778]: I0213 15:39:19.894817 2778 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:39:19.894881 kubelet[2778]: I0213 15:39:19.894836 2778 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:39:19.894881 kubelet[2778]: I0213 15:39:19.894870 2778 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:39:19.895196 kubelet[2778]: I0213 15:39:19.894891 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:39:19.898991 kubelet[2778]: W0213 15:39:19.898404 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.898991 kubelet[2778]: E0213 15:39:19.898471 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.898991 kubelet[2778]: W0213 15:39:19.898919 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-42&limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.898991 kubelet[2778]: E0213 15:39:19.898967 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-42&limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.899631 kubelet[2778]: I0213 15:39:19.899613 2778 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:39:19.906570 kubelet[2778]: I0213 15:39:19.906530 2778 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:39:19.908783 kubelet[2778]: W0213 15:39:19.908740 2778 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:39:19.909491 kubelet[2778]: I0213 15:39:19.909457 2778 server.go:1256] "Started kubelet"
Feb 13 15:39:19.909795 kubelet[2778]: I0213 15:39:19.909766 2778 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:39:19.911301 kubelet[2778]: I0213 15:39:19.910813 2778 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:39:19.913363 kubelet[2778]: I0213 15:39:19.913330 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:39:19.922021 kubelet[2778]: I0213 15:39:19.921978 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:39:19.922209 kubelet[2778]: I0213 15:39:19.922188 2778 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:39:19.925860 kubelet[2778]: E0213 15:39:19.925800 2778 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.42:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-42.1823ceb578fd53f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-42,UID:ip-172-31-17-42,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-42,},FirstTimestamp:2025-02-13 15:39:19.909413872 +0000 UTC m=+1.068505516,LastTimestamp:2025-02-13 15:39:19.909413872 +0000 UTC m=+1.068505516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-42,}"
Feb 13 15:39:19.926522 kubelet[2778]: I0213 15:39:19.926492 2778 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:39:19.927027 kubelet[2778]: I0213 15:39:19.926996 2778 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:39:19.927105 kubelet[2778]: I0213 15:39:19.927077 2778 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:39:19.927667 kubelet[2778]: W0213 15:39:19.927607 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.927752 kubelet[2778]: E0213 15:39:19.927680 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.927801 kubelet[2778]: E0213 15:39:19.927767 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": dial tcp 172.31.17.42:6443: connect: connection refused" interval="200ms"
Feb 13 15:39:19.936269 kubelet[2778]: I0213 15:39:19.935587 2778 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:39:19.936269 kubelet[2778]: I0213 15:39:19.935615 2778 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:39:19.936269 kubelet[2778]: I0213 15:39:19.935730 2778 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:39:19.971371 kubelet[2778]: I0213 15:39:19.971321 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:39:19.977247 kubelet[2778]: I0213 15:39:19.976889 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:39:19.977247 kubelet[2778]: I0213 15:39:19.976938 2778 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:39:19.977247 kubelet[2778]: I0213 15:39:19.976964 2778 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:39:19.977247 kubelet[2778]: E0213 15:39:19.977061 2778 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:39:19.997776 kubelet[2778]: W0213 15:39:19.997684 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.998020 kubelet[2778]: E0213 15:39:19.998000 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:19.998142 kubelet[2778]: E0213 15:39:19.997902 2778 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:39:20.018661 kubelet[2778]: I0213 15:39:20.018626 2778 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:39:20.018661 kubelet[2778]: I0213 15:39:20.018651 2778 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:39:20.018844 kubelet[2778]: I0213 15:39:20.018674 2778 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:39:20.029129 kubelet[2778]: I0213 15:39:20.029105 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42"
Feb 13 15:39:20.034462 kubelet[2778]: E0213 15:39:20.029838 2778 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.42:6443/api/v1/nodes\": dial tcp 172.31.17.42:6443: connect: connection refused" node="ip-172-31-17-42"
Feb 13 15:39:20.035302 kubelet[2778]: I0213 15:39:20.035269 2778 policy_none.go:49] "None policy: Start"
Feb 13 15:39:20.036430 kubelet[2778]: I0213 15:39:20.036407 2778 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:39:20.036528 kubelet[2778]: I0213 15:39:20.036439 2778 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:39:20.052182 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:39:20.067054 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:39:20.071422 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:39:20.077951 kubelet[2778]: E0213 15:39:20.077920 2778 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:39:20.082104 kubelet[2778]: I0213 15:39:20.081129 2778 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:39:20.082104 kubelet[2778]: I0213 15:39:20.081443 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:39:20.086284 kubelet[2778]: E0213 15:39:20.085894 2778 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-42\" not found"
Feb 13 15:39:20.128315 kubelet[2778]: E0213 15:39:20.128234 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": dial tcp 172.31.17.42:6443: connect: connection refused" interval="400ms"
Feb 13 15:39:20.234032 kubelet[2778]: I0213 15:39:20.234003 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42"
Feb 13 15:39:20.234423 kubelet[2778]: E0213 15:39:20.234404 2778 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.42:6443/api/v1/nodes\": dial tcp 172.31.17.42:6443: connect: connection refused" node="ip-172-31-17-42"
Feb 13 15:39:20.278828 kubelet[2778]: I0213 15:39:20.278789 2778 topology_manager.go:215] "Topology Admit Handler" podUID="a38aeccfeaf395331dc322105dde99be" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-42"
Feb 13 15:39:20.280998 kubelet[2778]: I0213 15:39:20.280970 2778 topology_manager.go:215] "Topology Admit Handler" podUID="afbcf22a79d9635d226ba27383647e6e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.292715 kubelet[2778]: I0213 15:39:20.292670 2778 topology_manager.go:215] "Topology Admit Handler" podUID="ca1cd57d160c3804c4c37c2277b6c976" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-42"
Feb 13 15:39:20.329309 kubelet[2778]: I0213 15:39:20.329106 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-ca-certs\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42"
Feb 13 15:39:20.329309 kubelet[2778]: I0213 15:39:20.329192 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42"
Feb 13 15:39:20.329309 kubelet[2778]: I0213 15:39:20.329229 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.329309 kubelet[2778]: I0213 15:39:20.329256 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.329309 kubelet[2778]: I0213 15:39:20.329282 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.329603 kubelet[2778]: I0213 15:39:20.329311 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca1cd57d160c3804c4c37c2277b6c976-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-42\" (UID: \"ca1cd57d160c3804c4c37c2277b6c976\") " pod="kube-system/kube-scheduler-ip-172-31-17-42"
Feb 13 15:39:20.329603 kubelet[2778]: I0213 15:39:20.329342 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42"
Feb 13 15:39:20.329603 kubelet[2778]: I0213 15:39:20.329368 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.329603 kubelet[2778]: I0213 15:39:20.329401 2778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42"
Feb 13 15:39:20.331546 systemd[1]: Created slice kubepods-burstable-poda38aeccfeaf395331dc322105dde99be.slice - libcontainer container kubepods-burstable-poda38aeccfeaf395331dc322105dde99be.slice.
Feb 13 15:39:20.356728 systemd[1]: Created slice kubepods-burstable-podafbcf22a79d9635d226ba27383647e6e.slice - libcontainer container kubepods-burstable-podafbcf22a79d9635d226ba27383647e6e.slice.
Feb 13 15:39:20.370785 systemd[1]: Created slice kubepods-burstable-podca1cd57d160c3804c4c37c2277b6c976.slice - libcontainer container kubepods-burstable-podca1cd57d160c3804c4c37c2277b6c976.slice.
Feb 13 15:39:20.529215 kubelet[2778]: E0213 15:39:20.529179 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": dial tcp 172.31.17.42:6443: connect: connection refused" interval="800ms"
Feb 13 15:39:20.637334 kubelet[2778]: I0213 15:39:20.637225 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42"
Feb 13 15:39:20.637804 kubelet[2778]: E0213 15:39:20.637767 2778 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.42:6443/api/v1/nodes\": dial tcp 172.31.17.42:6443: connect: connection refused" node="ip-172-31-17-42"
Feb 13 15:39:20.652022 containerd[1897]: time="2025-02-13T15:39:20.651970327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-42,Uid:a38aeccfeaf395331dc322105dde99be,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:20.669224 containerd[1897]: time="2025-02-13T15:39:20.669177925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-42,Uid:afbcf22a79d9635d226ba27383647e6e,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:20.674416 containerd[1897]: time="2025-02-13T15:39:20.674375104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-42,Uid:ca1cd57d160c3804c4c37c2277b6c976,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:20.940005 kubelet[2778]: W0213 15:39:20.939693 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:20.940005 kubelet[2778]: E0213 15:39:20.939928 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:20.953914 kubelet[2778]: W0213 15:39:20.953676 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:20.954040 kubelet[2778]: E0213 15:39:20.953926 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:21.045338 kubelet[2778]: W0213 15:39:21.045235 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:21.045338 kubelet[2778]: E0213 15:39:21.045342 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:21.120954 kubelet[2778]: W0213 15:39:21.120890 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-42&limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:21.120954 kubelet[2778]: E0213 15:39:21.120959 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-42&limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused
Feb 13 15:39:21.244947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533883183.mount: Deactivated successfully.
Feb 13 15:39:21.260822 containerd[1897]: time="2025-02-13T15:39:21.260757818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:21.267958 containerd[1897]: time="2025-02-13T15:39:21.267898051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:39:21.269373 containerd[1897]: time="2025-02-13T15:39:21.269330969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:21.271053 containerd[1897]: time="2025-02-13T15:39:21.271007233Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:21.277970 containerd[1897]: time="2025-02-13T15:39:21.277263796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:21.281339 containerd[1897]: time="2025-02-13T15:39:21.279940194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:21.281339 containerd[1897]: time="2025-02-13T15:39:21.280469500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:21.283119 containerd[1897]: time="2025-02-13T15:39:21.282808404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:21.286248 containerd[1897]: time="2025-02-13T15:39:21.285870439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.832243ms" Feb 13 15:39:21.297772 containerd[1897]: time="2025-02-13T15:39:21.297707991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.426683ms" Feb 13 15:39:21.305114 containerd[1897]: time="2025-02-13T15:39:21.305046575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.564391ms" Feb 13 
15:39:21.330379 kubelet[2778]: E0213 15:39:21.330125 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": dial tcp 172.31.17.42:6443: connect: connection refused" interval="1.6s" Feb 13 15:39:21.446951 kubelet[2778]: I0213 15:39:21.446519 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42" Feb 13 15:39:21.446951 kubelet[2778]: E0213 15:39:21.446923 2778 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.42:6443/api/v1/nodes\": dial tcp 172.31.17.42:6443: connect: connection refused" node="ip-172-31-17-42" Feb 13 15:39:21.686644 containerd[1897]: time="2025-02-13T15:39:21.685032116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:21.687332 containerd[1897]: time="2025-02-13T15:39:21.686751662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:21.687332 containerd[1897]: time="2025-02-13T15:39:21.687111128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.688952 containerd[1897]: time="2025-02-13T15:39:21.688705057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.688952 containerd[1897]: time="2025-02-13T15:39:21.681634473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:21.688952 containerd[1897]: time="2025-02-13T15:39:21.688882593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:21.688952 containerd[1897]: time="2025-02-13T15:39:21.688910160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.690999 containerd[1897]: time="2025-02-13T15:39:21.690933935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.694872 containerd[1897]: time="2025-02-13T15:39:21.693973158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:21.694872 containerd[1897]: time="2025-02-13T15:39:21.694180768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:21.694872 containerd[1897]: time="2025-02-13T15:39:21.694207854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.694872 containerd[1897]: time="2025-02-13T15:39:21.694336021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:21.737498 systemd[1]: Started cri-containerd-87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a.scope - libcontainer container 87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a. Feb 13 15:39:21.771366 systemd[1]: Started cri-containerd-10d113b8f011f812e1cb9f05fe84bf6f8bd63c521b433f9747ec82bf32495f64.scope - libcontainer container 10d113b8f011f812e1cb9f05fe84bf6f8bd63c521b433f9747ec82bf32495f64. Feb 13 15:39:21.774745 systemd[1]: Started cri-containerd-2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5.scope - libcontainer container 2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5. 
Feb 13 15:39:21.871987 containerd[1897]: time="2025-02-13T15:39:21.871808327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-42,Uid:afbcf22a79d9635d226ba27383647e6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a\"" Feb 13 15:39:21.882758 containerd[1897]: time="2025-02-13T15:39:21.882608749Z" level=info msg="CreateContainer within sandbox \"87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:39:21.915913 containerd[1897]: time="2025-02-13T15:39:21.915869855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-42,Uid:a38aeccfeaf395331dc322105dde99be,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d113b8f011f812e1cb9f05fe84bf6f8bd63c521b433f9747ec82bf32495f64\"" Feb 13 15:39:21.923643 containerd[1897]: time="2025-02-13T15:39:21.922845371Z" level=info msg="CreateContainer within sandbox \"10d113b8f011f812e1cb9f05fe84bf6f8bd63c521b433f9747ec82bf32495f64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:39:21.926373 containerd[1897]: time="2025-02-13T15:39:21.926231325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-42,Uid:ca1cd57d160c3804c4c37c2277b6c976,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5\"" Feb 13 15:39:21.951919 containerd[1897]: time="2025-02-13T15:39:21.951791757Z" level=info msg="CreateContainer within sandbox \"2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:39:21.969311 kubelet[2778]: E0213 15:39:21.969258 2778 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post "https://172.31.17.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.42:6443: connect: connection refused Feb 13 15:39:22.182848 containerd[1897]: time="2025-02-13T15:39:22.182679805Z" level=info msg="CreateContainer within sandbox \"87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c\"" Feb 13 15:39:22.183723 containerd[1897]: time="2025-02-13T15:39:22.183537390Z" level=info msg="StartContainer for \"48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c\"" Feb 13 15:39:22.190506 containerd[1897]: time="2025-02-13T15:39:22.190342915Z" level=info msg="CreateContainer within sandbox \"2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104\"" Feb 13 15:39:22.192512 containerd[1897]: time="2025-02-13T15:39:22.192380547Z" level=info msg="CreateContainer within sandbox \"10d113b8f011f812e1cb9f05fe84bf6f8bd63c521b433f9747ec82bf32495f64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84b71f86f9696b2c391b4823e478a3f8f3370e379b74501515a0ce1e7bbd63dc\"" Feb 13 15:39:22.193035 containerd[1897]: time="2025-02-13T15:39:22.192849051Z" level=info msg="StartContainer for \"84b71f86f9696b2c391b4823e478a3f8f3370e379b74501515a0ce1e7bbd63dc\"" Feb 13 15:39:22.193035 containerd[1897]: time="2025-02-13T15:39:22.192957151Z" level=info msg="StartContainer for \"043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104\"" Feb 13 15:39:22.240956 systemd[1]: run-containerd-runc-k8s.io-48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c-runc.qnkByH.mount: Deactivated successfully. 
Feb 13 15:39:22.265559 systemd[1]: Started cri-containerd-48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c.scope - libcontainer container 48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c. Feb 13 15:39:22.280573 systemd[1]: Started cri-containerd-043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104.scope - libcontainer container 043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104. Feb 13 15:39:22.290569 systemd[1]: Started cri-containerd-84b71f86f9696b2c391b4823e478a3f8f3370e379b74501515a0ce1e7bbd63dc.scope - libcontainer container 84b71f86f9696b2c391b4823e478a3f8f3370e379b74501515a0ce1e7bbd63dc. Feb 13 15:39:22.384321 containerd[1897]: time="2025-02-13T15:39:22.384262122Z" level=info msg="StartContainer for \"48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c\" returns successfully" Feb 13 15:39:22.422314 containerd[1897]: time="2025-02-13T15:39:22.421861647Z" level=info msg="StartContainer for \"84b71f86f9696b2c391b4823e478a3f8f3370e379b74501515a0ce1e7bbd63dc\" returns successfully" Feb 13 15:39:22.426192 containerd[1897]: time="2025-02-13T15:39:22.425722732Z" level=info msg="StartContainer for \"043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104\" returns successfully" Feb 13 15:39:22.880663 kubelet[2778]: W0213 15:39:22.879982 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused Feb 13 15:39:22.880663 kubelet[2778]: E0213 15:39:22.880039 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused Feb 13 15:39:22.931879 kubelet[2778]: E0213 15:39:22.931840 2778 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": dial tcp 172.31.17.42:6443: connect: connection refused" interval="3.2s" Feb 13 15:39:23.057588 kubelet[2778]: I0213 15:39:23.057552 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42" Feb 13 15:39:23.061862 kubelet[2778]: E0213 15:39:23.061315 2778 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.42:6443/api/v1/nodes\": dial tcp 172.31.17.42:6443: connect: connection refused" node="ip-172-31-17-42" Feb 13 15:39:23.131479 kubelet[2778]: W0213 15:39:23.129825 2778 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused Feb 13 15:39:23.131479 kubelet[2778]: E0213 15:39:23.129874 2778 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.42:6443: connect: connection refused Feb 13 15:39:25.900687 kubelet[2778]: I0213 15:39:25.900636 2778 apiserver.go:52] "Watching apiserver" Feb 13 15:39:25.931192 kubelet[2778]: I0213 15:39:25.927264 2778 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:39:26.137641 kubelet[2778]: E0213 15:39:26.137571 2778 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-42\" not found" node="ip-172-31-17-42" Feb 13 15:39:26.138099 kubelet[2778]: E0213 15:39:26.138075 2778 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for 
the condition; caused by: nodes "ip-172-31-17-42" not found Feb 13 15:39:26.265186 kubelet[2778]: I0213 15:39:26.264812 2778 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42" Feb 13 15:39:26.281630 kubelet[2778]: I0213 15:39:26.276964 2778 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-42" Feb 13 15:39:29.075077 systemd[1]: Reloading requested from client PID 3054 ('systemctl') (unit session-5.scope)... Feb 13 15:39:29.075095 systemd[1]: Reloading... Feb 13 15:39:29.299228 zram_generator::config[3097]: No configuration found. Feb 13 15:39:29.562960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:29.716180 systemd[1]: Reloading finished in 640 ms. Feb 13 15:39:29.790886 update_engine[1876]: I20250213 15:39:29.789693 1876 update_attempter.cc:509] Updating boot flags... Feb 13 15:39:29.795322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:29.802224 kubelet[2778]: I0213 15:39:29.796228 2778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:29.831663 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:29.831966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:29.832031 systemd[1]: kubelet.service: Consumed 1.299s CPU time, 109.5M memory peak, 0B memory swap peak. Feb 13 15:39:29.848561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:39:29.903609 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3158) Feb 13 15:39:30.203177 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3147) Feb 13 15:39:30.455235 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3147) Feb 13 15:39:30.759825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:30.773674 (kubelet)[3418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:30.884441 kubelet[3418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:30.884441 kubelet[3418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:30.884441 kubelet[3418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:39:30.885492 kubelet[3418]: I0213 15:39:30.884517 3418 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:30.891674 kubelet[3418]: I0213 15:39:30.891639 3418 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:39:30.891674 kubelet[3418]: I0213 15:39:30.891668 3418 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:30.891997 kubelet[3418]: I0213 15:39:30.891975 3418 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:39:30.893859 kubelet[3418]: I0213 15:39:30.893811 3418 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:39:30.918958 kubelet[3418]: I0213 15:39:30.918363 3418 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:30.930130 kubelet[3418]: I0213 15:39:30.930093 3418 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:39:30.933501 kubelet[3418]: I0213 15:39:30.933422 3418 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:30.934184 kubelet[3418]: I0213 15:39:30.933867 3418 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:39:30.934184 kubelet[3418]: I0213 15:39:30.933903 3418 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:30.934184 kubelet[3418]: I0213 15:39:30.933920 3418 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:39:30.934184 kubelet[3418]: I0213 
15:39:30.933962 3418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:30.934478 kubelet[3418]: I0213 15:39:30.934464 3418 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:39:30.936381 kubelet[3418]: I0213 15:39:30.936344 3418 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:30.939445 kubelet[3418]: I0213 15:39:30.939413 3418 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:39:30.939558 kubelet[3418]: I0213 15:39:30.939481 3418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:30.943276 kubelet[3418]: I0213 15:39:30.943253 3418 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:30.948632 kubelet[3418]: I0213 15:39:30.948597 3418 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:30.951579 kubelet[3418]: I0213 15:39:30.951529 3418 server.go:1256] "Started kubelet" Feb 13 15:39:30.980179 kubelet[3418]: I0213 15:39:30.977847 3418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:30.991473 kubelet[3418]: I0213 15:39:30.991401 3418 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:30.994648 kubelet[3418]: I0213 15:39:30.994620 3418 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:39:30.998884 kubelet[3418]: I0213 15:39:30.998853 3418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:31.000426 kubelet[3418]: I0213 15:39:31.000401 3418 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:31.020488 kubelet[3418]: I0213 15:39:31.019217 3418 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:39:31.032889 kubelet[3418]: I0213 15:39:31.023017 3418 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Feb 13 15:39:31.038515 kubelet[3418]: I0213 15:39:31.038217 3418 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:31.038515 kubelet[3418]: I0213 15:39:31.038458 3418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:31.042547 kubelet[3418]: E0213 15:39:31.042237 3418 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:31.045235 kubelet[3418]: I0213 15:39:31.044017 3418 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:39:31.059933 kubelet[3418]: I0213 15:39:31.059873 3418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:31.062078 kubelet[3418]: I0213 15:39:31.061994 3418 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:31.072027 kubelet[3418]: I0213 15:39:31.069384 3418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:39:31.072348 kubelet[3418]: I0213 15:39:31.072054 3418 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:39:31.072348 kubelet[3418]: I0213 15:39:31.072226 3418 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:39:31.072348 kubelet[3418]: E0213 15:39:31.072290 3418 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:31.136749 kubelet[3418]: I0213 15:39:31.136700 3418 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-42" Feb 13 15:39:31.155732 kubelet[3418]: I0213 15:39:31.155688 3418 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-42" Feb 13 15:39:31.155974 kubelet[3418]: I0213 15:39:31.155960 3418 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-42" Feb 13 15:39:31.172667 kubelet[3418]: I0213 15:39:31.172491 3418 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:39:31.172876 kubelet[3418]: I0213 15:39:31.172865 3418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:31.173085 kubelet[3418]: E0213 15:39:31.172561 3418 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:39:31.173085 kubelet[3418]: I0213 15:39:31.173012 3418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:31.173512 kubelet[3418]: I0213 15:39:31.173454 3418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:39:31.173512 kubelet[3418]: I0213 15:39:31.173487 3418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:39:31.173512 kubelet[3418]: I0213 15:39:31.173497 3418 policy_none.go:49] "None policy: Start" Feb 13 15:39:31.175847 kubelet[3418]: I0213 15:39:31.174886 3418 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:39:31.175847 kubelet[3418]: I0213 
15:39:31.174924 3418 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:31.175847 kubelet[3418]: I0213 15:39:31.175107 3418 state_mem.go:75] "Updated machine memory state" Feb 13 15:39:31.199085 kubelet[3418]: I0213 15:39:31.197939 3418 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:31.203951 kubelet[3418]: I0213 15:39:31.203924 3418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:31.374053 kubelet[3418]: I0213 15:39:31.374015 3418 topology_manager.go:215] "Topology Admit Handler" podUID="afbcf22a79d9635d226ba27383647e6e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.374647 kubelet[3418]: I0213 15:39:31.374128 3418 topology_manager.go:215] "Topology Admit Handler" podUID="ca1cd57d160c3804c4c37c2277b6c976" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-42" Feb 13 15:39:31.374647 kubelet[3418]: I0213 15:39:31.374192 3418 topology_manager.go:215] "Topology Admit Handler" podUID="a38aeccfeaf395331dc322105dde99be" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-42" Feb 13 15:39:31.389113 kubelet[3418]: E0213 15:39:31.388778 3418 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-42\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.389113 kubelet[3418]: E0213 15:39:31.388839 3418 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-42\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-42" Feb 13 15:39:31.448170 kubelet[3418]: I0213 15:39:31.448075 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: 
\"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.448333 kubelet[3418]: I0213 15:39:31.448203 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.448333 kubelet[3418]: I0213 15:39:31.448241 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.448333 kubelet[3418]: I0213 15:39:31.448271 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.448333 kubelet[3418]: I0213 15:39:31.448304 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afbcf22a79d9635d226ba27383647e6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-42\" (UID: \"afbcf22a79d9635d226ba27383647e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:31.448333 kubelet[3418]: I0213 15:39:31.448333 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ca1cd57d160c3804c4c37c2277b6c976-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-42\" (UID: \"ca1cd57d160c3804c4c37c2277b6c976\") " pod="kube-system/kube-scheduler-ip-172-31-17-42" Feb 13 15:39:31.448978 kubelet[3418]: I0213 15:39:31.448364 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-ca-certs\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42" Feb 13 15:39:31.448978 kubelet[3418]: I0213 15:39:31.448871 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42" Feb 13 15:39:31.448978 kubelet[3418]: I0213 15:39:31.448920 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a38aeccfeaf395331dc322105dde99be-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-42\" (UID: \"a38aeccfeaf395331dc322105dde99be\") " pod="kube-system/kube-apiserver-ip-172-31-17-42" Feb 13 15:39:31.941370 kubelet[3418]: I0213 15:39:31.941224 3418 apiserver.go:52] "Watching apiserver" Feb 13 15:39:32.035584 kubelet[3418]: I0213 15:39:32.035463 3418 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:39:32.130574 kubelet[3418]: E0213 15:39:32.127218 3418 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-42\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-42" Feb 13 15:39:32.133068 kubelet[3418]: E0213 15:39:32.133041 3418 
kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-42\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-42" Feb 13 15:39:32.176109 kubelet[3418]: I0213 15:39:32.175857 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-42" podStartSLOduration=5.175802786 podStartE2EDuration="5.175802786s" podCreationTimestamp="2025-02-13 15:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:32.159951604 +0000 UTC m=+1.369730103" watchObservedRunningTime="2025-02-13 15:39:32.175802786 +0000 UTC m=+1.385581286" Feb 13 15:39:32.190612 kubelet[3418]: I0213 15:39:32.190574 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-42" podStartSLOduration=3.19052122 podStartE2EDuration="3.19052122s" podCreationTimestamp="2025-02-13 15:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:32.17732508 +0000 UTC m=+1.387103584" watchObservedRunningTime="2025-02-13 15:39:32.19052122 +0000 UTC m=+1.400299708" Feb 13 15:39:32.207302 kubelet[3418]: I0213 15:39:32.206373 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-42" podStartSLOduration=1.206321025 podStartE2EDuration="1.206321025s" podCreationTimestamp="2025-02-13 15:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:32.191501744 +0000 UTC m=+1.401280244" watchObservedRunningTime="2025-02-13 15:39:32.206321025 +0000 UTC m=+1.416099526" Feb 13 15:39:32.536902 sudo[2174]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:32.560510 sshd[2173]: Connection 
closed by 139.178.89.65 port 50750 Feb 13 15:39:32.565261 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:32.571711 systemd[1]: sshd@4-172.31.17.42:22-139.178.89.65:50750.service: Deactivated successfully. Feb 13 15:39:32.575416 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:39:32.575727 systemd[1]: session-5.scope: Consumed 4.589s CPU time, 183.1M memory peak, 0B memory swap peak. Feb 13 15:39:32.577920 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:39:32.579883 systemd-logind[1875]: Removed session 5. Feb 13 15:39:41.555321 kubelet[3418]: I0213 15:39:41.555285 3418 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:39:41.556215 kubelet[3418]: I0213 15:39:41.556131 3418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:39:41.556277 containerd[1897]: time="2025-02-13T15:39:41.555830326Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:39:41.612560 kubelet[3418]: I0213 15:39:41.612512 3418 topology_manager.go:215] "Topology Admit Handler" podUID="5e765428-fc44-48cb-9e63-bda4ffc4b032" podNamespace="kube-system" podName="kube-proxy-6ld55" Feb 13 15:39:41.627686 kubelet[3418]: I0213 15:39:41.627471 3418 topology_manager.go:215] "Topology Admit Handler" podUID="4c41f370-0e29-4a97-bcd9-c484fbf8b4ce" podNamespace="kube-flannel" podName="kube-flannel-ds-gqbjx" Feb 13 15:39:41.632744 systemd[1]: Created slice kubepods-besteffort-pod5e765428_fc44_48cb_9e63_bda4ffc4b032.slice - libcontainer container kubepods-besteffort-pod5e765428_fc44_48cb_9e63_bda4ffc4b032.slice. 
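Editor's note: at 15:39:41 the kubelet receives its Pod CIDR (192.168.0.0/24) via CRI and starts admitting the kube-proxy and kube-flannel pods. A minimal sketch (the helper name is ours, not kubelet code) of checking whether an address falls inside that range:

```python
import ipaddress

# Pod CIDR reported in the kubelet_network.go entry above
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

def in_pod_cidr(ip: str) -> bool:
    """Return True if the address falls inside this node's Pod CIDR."""
    return ipaddress.ip_address(ip) in pod_cidr

print(in_pod_cidr("192.168.0.12"))   # True: a pod address on this node
print(in_pod_cidr("172.31.17.42"))   # False: the node's own address
```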
Feb 13 15:39:41.643143 kubelet[3418]: W0213 15:39:41.642794 3418 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.643143 kubelet[3418]: E0213 15:39:41.642844 3418 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.644197 kubelet[3418]: W0213 15:39:41.643311 3418 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.644197 kubelet[3418]: E0213 15:39:41.643337 3418 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.644197 kubelet[3418]: W0213 15:39:41.643388 3418 reflector.go:539] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.644197 kubelet[3418]: 
E0213 15:39:41.643404 3418 reflector.go:147] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.644197 kubelet[3418]: W0213 15:39:41.643473 3418 reflector.go:539] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.645728 kubelet[3418]: E0213 15:39:41.643500 3418 reflector.go:147] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-17-42" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-17-42' and this object Feb 13 15:39:41.651940 systemd[1]: Created slice kubepods-burstable-pod4c41f370_0e29_4a97_bcd9_c484fbf8b4ce.slice - libcontainer container kubepods-burstable-pod4c41f370_0e29_4a97_bcd9_c484fbf8b4ce.slice. 
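Editor's note: the reflector errors above come from the Node authorizer, which denies a kubelet list/watch access to a ConfigMap until the API server sees a pod bound to that node referencing it; they clear on their own once the binding propagates. A small triage helper (an assumption of ours, not kubelet code) to pull the namespace and ConfigMap name out of such messages:

```python
import re

# Matches node-authorizer "forbidden" messages like the reflector errors above.
FORBIDDEN_RE = re.compile(
    r'configmaps "(?P<name>[^"]+)" is forbidden: .*'
    r'in the namespace "(?P<namespace>[^"]+)"'
)

def parse_forbidden(msg: str):
    """Return (namespace, configmap_name), or None if the message doesn't match."""
    m = FORBIDDEN_RE.search(msg)
    return (m.group("namespace"), m.group("name")) if m else None

msg = ('configmaps "kube-flannel-cfg" is forbidden: User '
       '"system:node:ip-172-31-17-42" cannot list resource "configmaps" '
       'in API group "" in the namespace "kube-flannel"')
print(parse_forbidden(msg))  # ('kube-flannel', 'kube-flannel-cfg')
```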
Feb 13 15:39:41.739364 kubelet[3418]: I0213 15:39:41.739323 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-flannel-cfg\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:41.739545 kubelet[3418]: I0213 15:39:41.739388 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd9h\" (UniqueName: \"kubernetes.io/projected/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-api-access-dzd9h\") pod \"kube-proxy-6ld55\" (UID: \"5e765428-fc44-48cb-9e63-bda4ffc4b032\") " pod="kube-system/kube-proxy-6ld55" Feb 13 15:39:41.739545 kubelet[3418]: I0213 15:39:41.739415 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-xtables-lock\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:41.739545 kubelet[3418]: I0213 15:39:41.739443 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqqq\" (UniqueName: \"kubernetes.io/projected/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-kube-api-access-nvqqq\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:41.739545 kubelet[3418]: I0213 15:39:41.739470 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-proxy\") pod \"kube-proxy-6ld55\" (UID: \"5e765428-fc44-48cb-9e63-bda4ffc4b032\") " pod="kube-system/kube-proxy-6ld55" Feb 13 15:39:41.739545 kubelet[3418]: 
I0213 15:39:41.739496 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e765428-fc44-48cb-9e63-bda4ffc4b032-lib-modules\") pod \"kube-proxy-6ld55\" (UID: \"5e765428-fc44-48cb-9e63-bda4ffc4b032\") " pod="kube-system/kube-proxy-6ld55" Feb 13 15:39:41.739755 kubelet[3418]: I0213 15:39:41.739521 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-run\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:41.739755 kubelet[3418]: I0213 15:39:41.739549 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-cni\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:41.739755 kubelet[3418]: I0213 15:39:41.739585 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e765428-fc44-48cb-9e63-bda4ffc4b032-xtables-lock\") pod \"kube-proxy-6ld55\" (UID: \"5e765428-fc44-48cb-9e63-bda4ffc4b032\") " pod="kube-system/kube-proxy-6ld55" Feb 13 15:39:41.739755 kubelet[3418]: I0213 15:39:41.739615 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-cni-plugin\") pod \"kube-flannel-ds-gqbjx\" (UID: \"4c41f370-0e29-4a97-bcd9-c484fbf8b4ce\") " pod="kube-flannel/kube-flannel-ds-gqbjx" Feb 13 15:39:42.841715 kubelet[3418]: E0213 15:39:42.841670 3418 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync 
configmap cache: timed out waiting for the condition Feb 13 15:39:42.842253 kubelet[3418]: E0213 15:39:42.841781 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-flannel-cfg podName:4c41f370-0e29-4a97-bcd9-c484fbf8b4ce nodeName:}" failed. No retries permitted until 2025-02-13 15:39:43.34175531 +0000 UTC m=+12.551533801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/4c41f370-0e29-4a97-bcd9-c484fbf8b4ce-flannel-cfg") pod "kube-flannel-ds-gqbjx" (UID: "4c41f370-0e29-4a97-bcd9-c484fbf8b4ce") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:42.843853 kubelet[3418]: E0213 15:39:42.843668 3418 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:42.843979 kubelet[3418]: E0213 15:39:42.843910 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-proxy podName:5e765428-fc44-48cb-9e63-bda4ffc4b032 nodeName:}" failed. No retries permitted until 2025-02-13 15:39:43.34388553 +0000 UTC m=+12.553664036 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-proxy") pod "kube-proxy-6ld55" (UID: "5e765428-fc44-48cb-9e63-bda4ffc4b032") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:42.860342 kubelet[3418]: E0213 15:39:42.860286 3418 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:42.860342 kubelet[3418]: E0213 15:39:42.860341 3418 projected.go:200] Error preparing data for projected volume kube-api-access-dzd9h for pod kube-system/kube-proxy-6ld55: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:42.860554 kubelet[3418]: E0213 15:39:42.860426 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-api-access-dzd9h podName:5e765428-fc44-48cb-9e63-bda4ffc4b032 nodeName:}" failed. No retries permitted until 2025-02-13 15:39:43.36040372 +0000 UTC m=+12.570182211 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzd9h" (UniqueName: "kubernetes.io/projected/5e765428-fc44-48cb-9e63-bda4ffc4b032-kube-api-access-dzd9h") pod "kube-proxy-6ld55" (UID: "5e765428-fc44-48cb-9e63-bda4ffc4b032") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:39:43.446193 containerd[1897]: time="2025-02-13T15:39:43.446132595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ld55,Uid:5e765428-fc44-48cb-9e63-bda4ffc4b032,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:43.460719 containerd[1897]: time="2025-02-13T15:39:43.460656772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gqbjx,Uid:4c41f370-0e29-4a97-bcd9-c484fbf8b4ce,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:39:43.537824 containerd[1897]: time="2025-02-13T15:39:43.537670043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:43.537824 containerd[1897]: time="2025-02-13T15:39:43.537749255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:43.537824 containerd[1897]: time="2025-02-13T15:39:43.537772313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:43.538856 containerd[1897]: time="2025-02-13T15:39:43.537902786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:43.566933 containerd[1897]: time="2025-02-13T15:39:43.566594780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:43.566933 containerd[1897]: time="2025-02-13T15:39:43.566689576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:43.566933 containerd[1897]: time="2025-02-13T15:39:43.566708064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:43.568478 containerd[1897]: time="2025-02-13T15:39:43.568357957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:43.610905 systemd[1]: Started cri-containerd-2b3695233c413ca705a45448f2df3afb79f4403b24b8729f4ae6a2d2d0bb98cd.scope - libcontainer container 2b3695233c413ca705a45448f2df3afb79f4403b24b8729f4ae6a2d2d0bb98cd. Feb 13 15:39:43.623651 systemd[1]: Started cri-containerd-04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af.scope - libcontainer container 04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af. Feb 13 15:39:43.680693 containerd[1897]: time="2025-02-13T15:39:43.680648913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ld55,Uid:5e765428-fc44-48cb-9e63-bda4ffc4b032,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b3695233c413ca705a45448f2df3afb79f4403b24b8729f4ae6a2d2d0bb98cd\"" Feb 13 15:39:43.689217 containerd[1897]: time="2025-02-13T15:39:43.688957512Z" level=info msg="CreateContainer within sandbox \"2b3695233c413ca705a45448f2df3afb79f4403b24b8729f4ae6a2d2d0bb98cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:39:43.733945 containerd[1897]: time="2025-02-13T15:39:43.731112891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gqbjx,Uid:4c41f370-0e29-4a97-bcd9-c484fbf8b4ce,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\"" Feb 13 15:39:43.733945 containerd[1897]: time="2025-02-13T15:39:43.733276148Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:39:43.745754 
containerd[1897]: time="2025-02-13T15:39:43.745702773Z" level=info msg="CreateContainer within sandbox \"2b3695233c413ca705a45448f2df3afb79f4403b24b8729f4ae6a2d2d0bb98cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29e681cd1ee995a87d28a6ce26f96724dd9f77e35551b512857786fddf2514af\"" Feb 13 15:39:43.750373 containerd[1897]: time="2025-02-13T15:39:43.746898299Z" level=info msg="StartContainer for \"29e681cd1ee995a87d28a6ce26f96724dd9f77e35551b512857786fddf2514af\"" Feb 13 15:39:43.811609 systemd[1]: Started cri-containerd-29e681cd1ee995a87d28a6ce26f96724dd9f77e35551b512857786fddf2514af.scope - libcontainer container 29e681cd1ee995a87d28a6ce26f96724dd9f77e35551b512857786fddf2514af. Feb 13 15:39:43.869252 containerd[1897]: time="2025-02-13T15:39:43.869208133Z" level=info msg="StartContainer for \"29e681cd1ee995a87d28a6ce26f96724dd9f77e35551b512857786fddf2514af\" returns successfully" Feb 13 15:39:44.200922 kubelet[3418]: I0213 15:39:44.199968 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6ld55" podStartSLOduration=3.199916117 podStartE2EDuration="3.199916117s" podCreationTimestamp="2025-02-13 15:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:44.199756384 +0000 UTC m=+13.409534885" watchObservedRunningTime="2025-02-13 15:39:44.199916117 +0000 UTC m=+13.409694618" Feb 13 15:39:45.755525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045386081.mount: Deactivated successfully. 
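Editor's note: the podStartSLOduration values in these pod_startup_latency_tracker entries are essentially observedRunningTime minus podCreationTimestamp. A sketch reproducing the kube-proxy-6ld55 figure from the timestamps logged above (the kubelet's reported 3.199916117s uses a slightly different internal observation point, so the result agrees only to within a millisecond):

```python
from datetime import datetime

# Timestamps from the kube-proxy-6ld55 entry above (truncated to microseconds)
created = datetime.fromisoformat("2025-02-13 15:39:41+00:00")
observed = datetime.fromisoformat("2025-02-13 15:39:44.199756+00:00")

delta = (observed - created).total_seconds()
print(f"{delta:.6f}s")  # 3.199756s, vs. reported podStartSLOduration=3.199916117s
```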
Feb 13 15:39:45.885397 containerd[1897]: time="2025-02-13T15:39:45.885340277Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.890512 containerd[1897]: time="2025-02-13T15:39:45.890384788Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Feb 13 15:39:45.894859 containerd[1897]: time="2025-02-13T15:39:45.893571379Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.896617 containerd[1897]: time="2025-02-13T15:39:45.896565155Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.898315 containerd[1897]: time="2025-02-13T15:39:45.898262125Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.164946825s" Feb 13 15:39:45.898487 containerd[1897]: time="2025-02-13T15:39:45.898464806Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 13 15:39:45.901453 containerd[1897]: time="2025-02-13T15:39:45.901415089Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:39:45.932969 containerd[1897]: 
time="2025-02-13T15:39:45.932590824Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b\"" Feb 13 15:39:45.935264 containerd[1897]: time="2025-02-13T15:39:45.934056564Z" level=info msg="StartContainer for \"989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b\"" Feb 13 15:39:45.990407 systemd[1]: Started cri-containerd-989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b.scope - libcontainer container 989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b. Feb 13 15:39:46.045344 systemd[1]: cri-containerd-989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b.scope: Deactivated successfully. Feb 13 15:39:46.050911 containerd[1897]: time="2025-02-13T15:39:46.050845462Z" level=info msg="StartContainer for \"989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b\" returns successfully" Feb 13 15:39:46.161903 containerd[1897]: time="2025-02-13T15:39:46.161823308Z" level=info msg="shim disconnected" id=989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b namespace=k8s.io Feb 13 15:39:46.161903 containerd[1897]: time="2025-02-13T15:39:46.161887019Z" level=warning msg="cleaning up after shim disconnected" id=989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b namespace=k8s.io Feb 13 15:39:46.161903 containerd[1897]: time="2025-02-13T15:39:46.161900686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:46.198797 containerd[1897]: time="2025-02-13T15:39:46.195017875Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:39:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:39:46.627019 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-989bbb4d2435f0fd96e6f86975c52561aad1531a0de397bb633e2b736695015b-rootfs.mount: Deactivated successfully. Feb 13 15:39:47.188494 containerd[1897]: time="2025-02-13T15:39:47.188447802Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:39:49.483403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74005475.mount: Deactivated successfully. Feb 13 15:39:53.043091 containerd[1897]: time="2025-02-13T15:39:53.043015929Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:53.044596 containerd[1897]: time="2025-02-13T15:39:53.044433166Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Feb 13 15:39:53.046682 containerd[1897]: time="2025-02-13T15:39:53.046089996Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:53.050996 containerd[1897]: time="2025-02-13T15:39:53.050948296Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:53.052482 containerd[1897]: time="2025-02-13T15:39:53.052384598Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.863890817s" Feb 13 15:39:53.052482 containerd[1897]: time="2025-02-13T15:39:53.052437310Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference 
\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 13 15:39:53.058857 containerd[1897]: time="2025-02-13T15:39:53.058814261Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:39:53.085802 containerd[1897]: time="2025-02-13T15:39:53.085757169Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb\"" Feb 13 15:39:53.087854 containerd[1897]: time="2025-02-13T15:39:53.087793646Z" level=info msg="StartContainer for \"3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb\"" Feb 13 15:39:53.137639 systemd[1]: Started cri-containerd-3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb.scope - libcontainer container 3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb. Feb 13 15:39:53.170171 systemd[1]: cri-containerd-3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb.scope: Deactivated successfully. Feb 13 15:39:53.173686 containerd[1897]: time="2025-02-13T15:39:53.173582163Z" level=info msg="StartContainer for \"3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb\" returns successfully" Feb 13 15:39:53.198991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb-rootfs.mount: Deactivated successfully. 
Feb 13 15:39:53.213990 containerd[1897]: time="2025-02-13T15:39:53.213918450Z" level=info msg="shim disconnected" id=3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb namespace=k8s.io Feb 13 15:39:53.213990 containerd[1897]: time="2025-02-13T15:39:53.213982540Z" level=warning msg="cleaning up after shim disconnected" id=3b49d19b397a34ce151ea14a93a0f0880ded35b0365ab5419360ae7327ef48bb namespace=k8s.io Feb 13 15:39:53.213990 containerd[1897]: time="2025-02-13T15:39:53.213993893Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:53.240215 kubelet[3418]: I0213 15:39:53.238368 3418 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:39:53.282052 kubelet[3418]: I0213 15:39:53.281907 3418 topology_manager.go:215] "Topology Admit Handler" podUID="f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8" podNamespace="kube-system" podName="coredns-76f75df574-r9jkz" Feb 13 15:39:53.284127 kubelet[3418]: I0213 15:39:53.283319 3418 topology_manager.go:215] "Topology Admit Handler" podUID="07816203-e7d0-47ee-9c70-206926764611" podNamespace="kube-system" podName="coredns-76f75df574-jlwdx" Feb 13 15:39:53.295023 systemd[1]: Created slice kubepods-burstable-podf45d603e_b8ce_4de5_8fdb_d5c0e91b2dd8.slice - libcontainer container kubepods-burstable-podf45d603e_b8ce_4de5_8fdb_d5c0e91b2dd8.slice. Feb 13 15:39:53.316281 systemd[1]: Created slice kubepods-burstable-pod07816203_e7d0_47ee_9c70_206926764611.slice - libcontainer container kubepods-burstable-pod07816203_e7d0_47ee_9c70_206926764611.slice. 
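Editor's note: the coredns sandboxes about to be created will fail until flanneld obtains its lease and writes /run/flannel/subnet.env, the KEY=VALUE file the flannel CNI plugin loads (the loadFlannelSubnetEnv errors that follow). A minimal sketch of parsing such a file; the sample contents are illustrative assumptions, not values from this node:

```python
# Illustrative subnet.env contents (assumed, not captured from this node)
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true
"""

def parse_subnet_env(text: str) -> dict:
    """Parse subnet.env-style KEY=VALUE lines into a dict."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

print(parse_subnet_env(SAMPLE)["FLANNEL_SUBNET"])  # 192.168.0.1/24
```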
Feb 13 15:39:53.364269 kubelet[3418]: I0213 15:39:53.363923 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07816203-e7d0-47ee-9c70-206926764611-config-volume\") pod \"coredns-76f75df574-jlwdx\" (UID: \"07816203-e7d0-47ee-9c70-206926764611\") " pod="kube-system/coredns-76f75df574-jlwdx"
Feb 13 15:39:53.364439 kubelet[3418]: I0213 15:39:53.364332 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c5mn\" (UniqueName: \"kubernetes.io/projected/07816203-e7d0-47ee-9c70-206926764611-kube-api-access-2c5mn\") pod \"coredns-76f75df574-jlwdx\" (UID: \"07816203-e7d0-47ee-9c70-206926764611\") " pod="kube-system/coredns-76f75df574-jlwdx"
Feb 13 15:39:53.364496 kubelet[3418]: I0213 15:39:53.364450 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8-config-volume\") pod \"coredns-76f75df574-r9jkz\" (UID: \"f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8\") " pod="kube-system/coredns-76f75df574-r9jkz"
Feb 13 15:39:53.364545 kubelet[3418]: I0213 15:39:53.364496 3418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqxj\" (UniqueName: \"kubernetes.io/projected/f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8-kube-api-access-bpqxj\") pod \"coredns-76f75df574-r9jkz\" (UID: \"f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8\") " pod="kube-system/coredns-76f75df574-r9jkz"
Feb 13 15:39:53.619015 containerd[1897]: time="2025-02-13T15:39:53.618579487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r9jkz,Uid:f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:53.709994 containerd[1897]: time="2025-02-13T15:39:53.709938958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jlwdx,Uid:07816203-e7d0-47ee-9c70-206926764611,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:53.842543 containerd[1897]: time="2025-02-13T15:39:53.842490295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jlwdx,Uid:07816203-e7d0-47ee-9c70-206926764611,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b218e81b6ab73b4bbf6635016f7f855a5a8e4612642da63d2d942182d42111b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:39:53.843017 kubelet[3418]: E0213 15:39:53.842968 3418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b218e81b6ab73b4bbf6635016f7f855a5a8e4612642da63d2d942182d42111b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:39:53.843691 kubelet[3418]: E0213 15:39:53.843253 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b218e81b6ab73b4bbf6635016f7f855a5a8e4612642da63d2d942182d42111b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-jlwdx"
Feb 13 15:39:53.843691 kubelet[3418]: E0213 15:39:53.843337 3418 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b218e81b6ab73b4bbf6635016f7f855a5a8e4612642da63d2d942182d42111b9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-jlwdx"
Feb 13 15:39:53.845086 containerd[1897]: time="2025-02-13T15:39:53.844301346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r9jkz,Uid:f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9208ea688f40536186d8fc3ec4f5b94c2a226dabd28c36a7e2a322fc196434d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:39:53.846206 kubelet[3418]: E0213 15:39:53.846171 3418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9208ea688f40536186d8fc3ec4f5b94c2a226dabd28c36a7e2a322fc196434d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:39:53.846306 kubelet[3418]: E0213 15:39:53.846245 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9208ea688f40536186d8fc3ec4f5b94c2a226dabd28c36a7e2a322fc196434d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-r9jkz"
Feb 13 15:39:53.846306 kubelet[3418]: E0213 15:39:53.846284 3418 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9208ea688f40536186d8fc3ec4f5b94c2a226dabd28c36a7e2a322fc196434d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-r9jkz"
Feb 13 15:39:53.851305 kubelet[3418]: E0213 15:39:53.850935 3418 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jlwdx_kube-system(07816203-e7d0-47ee-9c70-206926764611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jlwdx_kube-system(07816203-e7d0-47ee-9c70-206926764611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b218e81b6ab73b4bbf6635016f7f855a5a8e4612642da63d2d942182d42111b9\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-jlwdx" podUID="07816203-e7d0-47ee-9c70-206926764611"
Feb 13 15:39:53.852382 kubelet[3418]: E0213 15:39:53.852341 3418 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-r9jkz_kube-system(f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-r9jkz_kube-system(f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9208ea688f40536186d8fc3ec4f5b94c2a226dabd28c36a7e2a322fc196434d0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-r9jkz" podUID="f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8"
Feb 13 15:39:54.230756 containerd[1897]: time="2025-02-13T15:39:54.230709896Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:39:54.285480 containerd[1897]: time="2025-02-13T15:39:54.285416546Z" level=info msg="CreateContainer within sandbox \"04b9268a3f5f32916f1168511d84f9c50f3dc67fd1d1633b0dd5deb8d5f703af\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"1d4f085ffc5c54c624d592e4348a9c871f495c4c1415b6b1e2a591a6f134c138\""
Feb 13 15:39:54.286271 containerd[1897]: time="2025-02-13T15:39:54.286209331Z" level=info msg="StartContainer for \"1d4f085ffc5c54c624d592e4348a9c871f495c4c1415b6b1e2a591a6f134c138\""
Feb 13 15:39:54.443507 systemd[1]: Started cri-containerd-1d4f085ffc5c54c624d592e4348a9c871f495c4c1415b6b1e2a591a6f134c138.scope - libcontainer container 1d4f085ffc5c54c624d592e4348a9c871f495c4c1415b6b1e2a591a6f134c138.
Feb 13 15:39:54.616682 containerd[1897]: time="2025-02-13T15:39:54.616496791Z" level=info msg="StartContainer for \"1d4f085ffc5c54c624d592e4348a9c871f495c4c1415b6b1e2a591a6f134c138\" returns successfully"
Feb 13 15:39:55.730089 (udev-worker)[3957]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:39:55.766524 systemd-networkd[1737]: flannel.1: Link UP
Feb 13 15:39:55.766537 systemd-networkd[1737]: flannel.1: Gained carrier
Feb 13 15:39:57.631530 systemd-networkd[1737]: flannel.1: Gained IPv6LL
Feb 13 15:40:00.195184 ntpd[1867]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 15:40:00.195290 ntpd[1867]: Listen normally on 7 flannel.1 [fe80::940b:2bff:feb1:c937%4]:123
Feb 13 15:40:00.195723 ntpd[1867]: 13 Feb 15:40:00 ntpd[1867]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 15:40:00.195723 ntpd[1867]: 13 Feb 15:40:00 ntpd[1867]: Listen normally on 7 flannel.1 [fe80::940b:2bff:feb1:c937%4]:123
Feb 13 15:40:05.074215 containerd[1897]: time="2025-02-13T15:40:05.074167087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jlwdx,Uid:07816203-e7d0-47ee-9c70-206926764611,Namespace:kube-system,Attempt:0,}"
Feb 13 15:40:05.198841 systemd-networkd[1737]: cni0: Link UP
Feb 13 15:40:05.198851 systemd-networkd[1737]: cni0: Gained carrier
Feb 13 15:40:05.206902 systemd-networkd[1737]: cni0: Lost carrier
Feb 13 15:40:05.208912 (udev-worker)[4073]: Network interface NamePolicy= disabled on kernel command line.
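The RunPodSandbox failures above all trace to one cause: the flannel CNI plugin could not read /run/flannel/subnet.env, because the kube-flannel container had not yet started and written it. Once flannel is running (the StartContainer and flannel.1 link-up entries above), that file exists as simple KEY=VALUE lines and the plugin's loadFlannelSubnetEnv succeeds. As a minimal sketch, assuming the usual flannel key names, with sample values inferred from the bridge netconf later in this log (the exact values on this node are not shown):

```python
# Sketch of parsing a flannel-style /run/flannel/subnet.env file.
# Keys are the ones flannel conventionally writes; the sample values are
# illustrative only, inferred from the delegated netconf later in this log.
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

env = parse_subnet_env(SAMPLE)
print(env["FLANNEL_SUBNET"])  # the per-node pod subnet
```

Until a file like this exists, every sandbox-create attempt fails exactly as logged, and kubelet keeps retrying the pods.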
Feb 13 15:40:05.296895 kernel: cni0: port 1(vethbdc4fae4) entered blocking state
Feb 13 15:40:05.297049 kernel: cni0: port 1(vethbdc4fae4) entered disabled state
Feb 13 15:40:05.296668 systemd-networkd[1737]: vethbdc4fae4: Link UP
Feb 13 15:40:05.312537 kernel: vethbdc4fae4: entered allmulticast mode
Feb 13 15:40:05.312696 kernel: vethbdc4fae4: entered promiscuous mode
Feb 13 15:40:05.316594 kernel: cni0: port 1(vethbdc4fae4) entered blocking state
Feb 13 15:40:05.316675 kernel: cni0: port 1(vethbdc4fae4) entered forwarding state
Feb 13 15:40:05.318165 kernel: cni0: port 1(vethbdc4fae4) entered disabled state
Feb 13 15:40:05.364804 kernel: cni0: port 1(vethbdc4fae4) entered blocking state
Feb 13 15:40:05.364931 kernel: cni0: port 1(vethbdc4fae4) entered forwarding state
Feb 13 15:40:05.367797 systemd-networkd[1737]: vethbdc4fae4: Gained carrier
Feb 13 15:40:05.370917 systemd-networkd[1737]: cni0: Gained carrier
Feb 13 15:40:05.408125 containerd[1897]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:40:05.408125 containerd[1897]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:40:05.531808 containerd[1897]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:40:05.531254712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:40:05.531808 containerd[1897]: time="2025-02-13T15:40:05.531333676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:40:05.531808 containerd[1897]: time="2025-02-13T15:40:05.531358577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:40:05.531808 containerd[1897]: time="2025-02-13T15:40:05.531575066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:40:05.596951 systemd[1]: run-containerd-runc-k8s.io-efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2-runc.0D5Pbk.mount: Deactivated successfully.
Feb 13 15:40:05.608466 systemd[1]: Started cri-containerd-efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2.scope - libcontainer container efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2.
Feb 13 15:40:05.747561 containerd[1897]: time="2025-02-13T15:40:05.747509680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jlwdx,Uid:07816203-e7d0-47ee-9c70-206926764611,Namespace:kube-system,Attempt:0,} returns sandbox id \"efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2\""
Feb 13 15:40:05.770579 containerd[1897]: time="2025-02-13T15:40:05.770530713Z" level=info msg="CreateContainer within sandbox \"efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:40:05.907456 containerd[1897]: time="2025-02-13T15:40:05.907339153Z" level=info msg="CreateContainer within sandbox \"efec3b9835fbf9921dd44962d4c989f23e6c909e869a9abda67b560318393bb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0d87eb8b92f8a06934fa3e41b08121187cf5ee7c9f038da86ee1e8d915c1be9\""
Feb 13 15:40:05.915040 containerd[1897]: time="2025-02-13T15:40:05.908603076Z" level=info msg="StartContainer for \"d0d87eb8b92f8a06934fa3e41b08121187cf5ee7c9f038da86ee1e8d915c1be9\""
Feb 13 15:40:05.997410 systemd[1]: Started cri-containerd-d0d87eb8b92f8a06934fa3e41b08121187cf5ee7c9f038da86ee1e8d915c1be9.scope - libcontainer container d0d87eb8b92f8a06934fa3e41b08121187cf5ee7c9f038da86ee1e8d915c1be9.
Feb 13 15:40:06.043753 containerd[1897]: time="2025-02-13T15:40:06.043708651Z" level=info msg="StartContainer for \"d0d87eb8b92f8a06934fa3e41b08121187cf5ee7c9f038da86ee1e8d915c1be9\" returns successfully"
Feb 13 15:40:06.073919 containerd[1897]: time="2025-02-13T15:40:06.073876205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r9jkz,Uid:f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:40:06.113171 kernel: cni0: port 2(veth64795bea) entered blocking state
Feb 13 15:40:06.113285 kernel: cni0: port 2(veth64795bea) entered disabled state
Feb 13 15:40:06.109056 systemd-networkd[1737]: veth64795bea: Link UP
Feb 13 15:40:06.111481 (udev-worker)[4075]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:40:06.121195 kernel: veth64795bea: entered allmulticast mode
Feb 13 15:40:06.124526 kernel: veth64795bea: entered promiscuous mode
Feb 13 15:40:06.124626 kernel: cni0: port 2(veth64795bea) entered blocking state
Feb 13 15:40:06.124651 kernel: cni0: port 2(veth64795bea) entered forwarding state
Feb 13 15:40:06.138720 systemd-networkd[1737]: veth64795bea: Gained carrier
Feb 13 15:40:06.140612 containerd[1897]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:40:06.140612 containerd[1897]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:40:06.179804 containerd[1897]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:40:06.179698453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:40:06.181088 containerd[1897]: time="2025-02-13T15:40:06.180792429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:40:06.181261 containerd[1897]: time="2025-02-13T15:40:06.181077182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:40:06.187197 containerd[1897]: time="2025-02-13T15:40:06.187097565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:40:06.270202 systemd[1]: run-containerd-runc-k8s.io-f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5-runc.rgrJ3u.mount: Deactivated successfully.
Feb 13 15:40:06.281812 systemd[1]: Started cri-containerd-f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5.scope - libcontainer container f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5.
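The containerd entries above show flannel's delegateAdd step: flannel hands the bridge plugin a host-local IPAM range of 192.168.0.0/24 for this node, with a route to the wider 192.168.0.0/17 flannel network. The Go-style dump spells that route's mask as net.IPMask{0xff, 0xff, 0x80, 0x0}, which is the same /17 the delegated JSON writes as {"dst":"192.168.0.0/17"}. A quick check of that correspondence, using values taken directly from the log:

```python
import ipaddress

# The Go dump shows Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}: counting set
# bits gives the prefix length the JSON form writes as /17.
mask_bytes = bytes([0xff, 0xff, 0x80, 0x00])
prefix_len = sum(bin(b).count("1") for b in mask_bytes)
print(prefix_len)  # 17

# This node's pod subnet sits inside the cluster-wide route destination,
# so pod-to-pod traffic for other nodes is routed via the flannel network.
node_subnet = ipaddress.ip_network("192.168.0.0/24")   # host-local IPAM range
flannel_net = ipaddress.ip_network("192.168.0.0/17")   # delegated route dst
print(node_subnet.subnet_of(flannel_net))  # True
```

The same netconf also carries the mtu of 8951, i.e. the underlying interface's MTU minus the VXLAN overlay overhead.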
Feb 13 15:40:06.289505 kubelet[3418]: I0213 15:40:06.289441 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-gqbjx" podStartSLOduration=15.969259144 podStartE2EDuration="25.289369856s" podCreationTimestamp="2025-02-13 15:39:41 +0000 UTC" firstStartedPulling="2025-02-13 15:39:43.73256031 +0000 UTC m=+12.942338789" lastFinishedPulling="2025-02-13 15:39:53.052671009 +0000 UTC m=+22.262449501" observedRunningTime="2025-02-13 15:39:55.243029984 +0000 UTC m=+24.452808478" watchObservedRunningTime="2025-02-13 15:40:06.289369856 +0000 UTC m=+35.499148354"
Feb 13 15:40:06.416805 containerd[1897]: time="2025-02-13T15:40:06.416761580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r9jkz,Uid:f45d603e-b8ce-4de5-8fdb-d5c0e91b2dd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5\""
Feb 13 15:40:06.423378 containerd[1897]: time="2025-02-13T15:40:06.423253577Z" level=info msg="CreateContainer within sandbox \"f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:40:06.571351 containerd[1897]: time="2025-02-13T15:40:06.571208093Z" level=info msg="CreateContainer within sandbox \"f6cc59786ab1f31c88f5b38559e8a6c23bc5abd8831b12e00420d4d15d02a2b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d9875f1fc2325aad9c7475f2a1e87c1e89da0c7c55498b0fd4b75e2a176340a\""
Feb 13 15:40:06.573214 containerd[1897]: time="2025-02-13T15:40:06.572356405Z" level=info msg="StartContainer for \"2d9875f1fc2325aad9c7475f2a1e87c1e89da0c7c55498b0fd4b75e2a176340a\""
Feb 13 15:40:06.631313 systemd[1]: Started cri-containerd-2d9875f1fc2325aad9c7475f2a1e87c1e89da0c7c55498b0fd4b75e2a176340a.scope - libcontainer container 2d9875f1fc2325aad9c7475f2a1e87c1e89da0c7c55498b0fd4b75e2a176340a.
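The pod_startup_latency_tracker line above for kube-flannel-ds-gqbjx reports several durations at once, and their relationship can be reproduced from the fields it prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). A sketch of the arithmetic, using only numbers copied from the log line:

```python
from datetime import datetime, timezone

# Fields copied from the kube-flannel-ds-gqbjx pod_startup_latency_tracker entry.
created = datetime(2025, 2, 13, 15, 39, 41, tzinfo=timezone.utc)
# watchObservedRunningTime 15:40:06.289369856, truncated to microseconds here.
watch_observed = datetime(2025, 2, 13, 15, 40, 6, 289369, tzinfo=timezone.utc)

e2e = (watch_observed - created).total_seconds()
print(f"{e2e:.6f}s")  # ~25.289370s, matching podStartE2EDuration="25.289369856s"

# SLO duration = E2E duration minus image-pull time, using the m=+ offsets:
# lastFinishedPulling m=+22.262449501, firstStartedPulling m=+12.942338789.
pull = 22.262449501 - 12.942338789
slo = 25.289369856 - pull
print(f"{slo:.9f}s")  # 15.969259144s, matching podStartSLOduration
```

The gap between observedRunningTime (15:39:55) and watchObservedRunningTime (15:40:06) reflects when the watch actually reported the pod running, not when kubelet first saw it.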
Feb 13 15:40:06.679843 containerd[1897]: time="2025-02-13T15:40:06.679722781Z" level=info msg="StartContainer for \"2d9875f1fc2325aad9c7475f2a1e87c1e89da0c7c55498b0fd4b75e2a176340a\" returns successfully"
Feb 13 15:40:06.719419 systemd-networkd[1737]: cni0: Gained IPv6LL
Feb 13 15:40:06.847415 systemd-networkd[1737]: vethbdc4fae4: Gained IPv6LL
Feb 13 15:40:07.111904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065125632.mount: Deactivated successfully.
Feb 13 15:40:07.314323 kubelet[3418]: I0213 15:40:07.313776 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jlwdx" podStartSLOduration=25.313724374 podStartE2EDuration="25.313724374s" podCreationTimestamp="2025-02-13 15:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:06.29231554 +0000 UTC m=+35.502094039" watchObservedRunningTime="2025-02-13 15:40:07.313724374 +0000 UTC m=+36.523502857"
Feb 13 15:40:07.362875 kubelet[3418]: I0213 15:40:07.361389 3418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r9jkz" podStartSLOduration=25.361332162 podStartE2EDuration="25.361332162s" podCreationTimestamp="2025-02-13 15:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:07.361132923 +0000 UTC m=+36.570911424" watchObservedRunningTime="2025-02-13 15:40:07.361332162 +0000 UTC m=+36.571110662"
Feb 13 15:40:07.551481 systemd-networkd[1737]: veth64795bea: Gained IPv6LL
Feb 13 15:40:10.195279 ntpd[1867]: Listen normally on 8 cni0 192.168.0.1:123
Feb 13 15:40:10.195795 ntpd[1867]: 13 Feb 15:40:10 ntpd[1867]: Listen normally on 8 cni0 192.168.0.1:123
Feb 13 15:40:10.195795 ntpd[1867]: 13 Feb 15:40:10 ntpd[1867]: Listen normally on 9 cni0 [fe80::bc4a:7ff:feff:f94f%5]:123
Feb 13 15:40:10.195795 ntpd[1867]: 13 Feb 15:40:10 ntpd[1867]: Listen normally on 10 vethbdc4fae4 [fe80::7427:cdff:fe63:392b%6]:123
Feb 13 15:40:10.195795 ntpd[1867]: 13 Feb 15:40:10 ntpd[1867]: Listen normally on 11 veth64795bea [fe80::8093:17ff:fe79:7ac%7]:123
Feb 13 15:40:10.195379 ntpd[1867]: Listen normally on 9 cni0 [fe80::bc4a:7ff:feff:f94f%5]:123
Feb 13 15:40:10.195436 ntpd[1867]: Listen normally on 10 vethbdc4fae4 [fe80::7427:cdff:fe63:392b%6]:123
Feb 13 15:40:10.195477 ntpd[1867]: Listen normally on 11 veth64795bea [fe80::8093:17ff:fe79:7ac%7]:123
Feb 13 15:40:21.276544 systemd[1]: Started sshd@5-172.31.17.42:22-139.178.89.65:47430.service - OpenSSH per-connection server daemon (139.178.89.65:47430).
Feb 13 15:40:21.473417 sshd[4370]: Accepted publickey for core from 139.178.89.65 port 47430 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:21.474999 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:21.488714 systemd-logind[1875]: New session 6 of user core.
Feb 13 15:40:21.496383 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:40:21.762804 sshd[4372]: Connection closed by 139.178.89.65 port 47430
Feb 13 15:40:21.763190 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:21.768838 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:40:21.769955 systemd[1]: sshd@5-172.31.17.42:22-139.178.89.65:47430.service: Deactivated successfully.
Feb 13 15:40:21.772546 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:40:21.774596 systemd-logind[1875]: Removed session 6.
Feb 13 15:40:26.804833 systemd[1]: Started sshd@6-172.31.17.42:22-139.178.89.65:47754.service - OpenSSH per-connection server daemon (139.178.89.65:47754).
Feb 13 15:40:27.010047 sshd[4405]: Accepted publickey for core from 139.178.89.65 port 47754 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:27.012128 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:27.023431 systemd-logind[1875]: New session 7 of user core.
Feb 13 15:40:27.030458 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:40:27.267202 sshd[4407]: Connection closed by 139.178.89.65 port 47754
Feb 13 15:40:27.269201 sshd-session[4405]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:27.273347 systemd[1]: sshd@6-172.31.17.42:22-139.178.89.65:47754.service: Deactivated successfully.
Feb 13 15:40:27.276469 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:40:27.277419 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:40:27.279419 systemd-logind[1875]: Removed session 7.
Feb 13 15:40:32.308541 systemd[1]: Started sshd@7-172.31.17.42:22-139.178.89.65:47768.service - OpenSSH per-connection server daemon (139.178.89.65:47768).
Feb 13 15:40:32.479192 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 47768 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:32.479911 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:32.487211 systemd-logind[1875]: New session 8 of user core.
Feb 13 15:40:32.494423 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:40:32.711256 sshd[4444]: Connection closed by 139.178.89.65 port 47768
Feb 13 15:40:32.712382 sshd-session[4442]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:32.716970 systemd[1]: sshd@7-172.31.17.42:22-139.178.89.65:47768.service: Deactivated successfully.
Feb 13 15:40:32.720035 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:40:32.722070 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:40:32.723893 systemd-logind[1875]: Removed session 8.
Feb 13 15:40:32.750617 systemd[1]: Started sshd@8-172.31.17.42:22-139.178.89.65:47780.service - OpenSSH per-connection server daemon (139.178.89.65:47780).
Feb 13 15:40:32.945876 sshd[4455]: Accepted publickey for core from 139.178.89.65 port 47780 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:32.947441 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:32.951806 systemd-logind[1875]: New session 9 of user core.
Feb 13 15:40:32.963410 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:40:33.218387 sshd[4457]: Connection closed by 139.178.89.65 port 47780
Feb 13 15:40:33.221027 sshd-session[4455]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:33.227988 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:40:33.231537 systemd[1]: sshd@8-172.31.17.42:22-139.178.89.65:47780.service: Deactivated successfully.
Feb 13 15:40:33.236058 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:40:33.248369 systemd-logind[1875]: Removed session 9.
Feb 13 15:40:33.256444 systemd[1]: Started sshd@9-172.31.17.42:22-139.178.89.65:47796.service - OpenSSH per-connection server daemon (139.178.89.65:47796).
Feb 13 15:40:33.479381 sshd[4466]: Accepted publickey for core from 139.178.89.65 port 47796 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:33.481716 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:33.489854 systemd-logind[1875]: New session 10 of user core.
Feb 13 15:40:33.495372 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:40:33.747770 sshd[4468]: Connection closed by 139.178.89.65 port 47796
Feb 13 15:40:33.748703 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:33.756745 systemd[1]: sshd@9-172.31.17.42:22-139.178.89.65:47796.service: Deactivated successfully.
Feb 13 15:40:33.761541 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:40:33.763720 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:40:33.765523 systemd-logind[1875]: Removed session 10.
Feb 13 15:40:38.792654 systemd[1]: Started sshd@10-172.31.17.42:22-139.178.89.65:58500.service - OpenSSH per-connection server daemon (139.178.89.65:58500).
Feb 13 15:40:38.961974 sshd[4499]: Accepted publickey for core from 139.178.89.65 port 58500 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:38.963529 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:38.970581 systemd-logind[1875]: New session 11 of user core.
Feb 13 15:40:38.979373 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:40:39.206893 sshd[4501]: Connection closed by 139.178.89.65 port 58500
Feb 13 15:40:39.208841 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:39.214498 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:40:39.215633 systemd[1]: sshd@10-172.31.17.42:22-139.178.89.65:58500.service: Deactivated successfully.
Feb 13 15:40:39.218416 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:40:39.219672 systemd-logind[1875]: Removed session 11.
Feb 13 15:40:44.246589 systemd[1]: Started sshd@11-172.31.17.42:22-139.178.89.65:58506.service - OpenSSH per-connection server daemon (139.178.89.65:58506).
Feb 13 15:40:44.430196 sshd[4535]: Accepted publickey for core from 139.178.89.65 port 58506 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:44.433007 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:44.441251 systemd-logind[1875]: New session 12 of user core.
Feb 13 15:40:44.446396 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:40:44.683641 sshd[4537]: Connection closed by 139.178.89.65 port 58506
Feb 13 15:40:44.685210 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:44.690536 systemd[1]: sshd@11-172.31.17.42:22-139.178.89.65:58506.service: Deactivated successfully.
Feb 13 15:40:44.694977 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:40:44.695837 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:40:44.697425 systemd-logind[1875]: Removed session 12.
Feb 13 15:40:49.731304 systemd[1]: Started sshd@12-172.31.17.42:22-139.178.89.65:43220.service - OpenSSH per-connection server daemon (139.178.89.65:43220).
Feb 13 15:40:49.948325 sshd[4570]: Accepted publickey for core from 139.178.89.65 port 43220 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:49.949970 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:49.955046 systemd-logind[1875]: New session 13 of user core.
Feb 13 15:40:49.959358 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:40:50.185637 sshd[4572]: Connection closed by 139.178.89.65 port 43220
Feb 13 15:40:50.187398 sshd-session[4570]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:50.195693 systemd[1]: sshd@12-172.31.17.42:22-139.178.89.65:43220.service: Deactivated successfully.
Feb 13 15:40:50.198550 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:40:50.199594 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:40:50.201217 systemd-logind[1875]: Removed session 13.
Feb 13 15:40:55.229598 systemd[1]: Started sshd@13-172.31.17.42:22-139.178.89.65:47630.service - OpenSSH per-connection server daemon (139.178.89.65:47630).
Feb 13 15:40:55.444566 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 47630 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:55.446419 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:55.466219 systemd-logind[1875]: New session 14 of user core.
Feb 13 15:40:55.478859 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:40:55.729228 sshd[4606]: Connection closed by 139.178.89.65 port 47630
Feb 13 15:40:55.733631 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:55.737324 systemd[1]: sshd@13-172.31.17.42:22-139.178.89.65:47630.service: Deactivated successfully.
Feb 13 15:40:55.740630 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:40:55.744085 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:40:55.746128 systemd-logind[1875]: Removed session 14.
Feb 13 15:40:55.762171 systemd[1]: Started sshd@14-172.31.17.42:22-139.178.89.65:47646.service - OpenSSH per-connection server daemon (139.178.89.65:47646).
Feb 13 15:40:55.995108 sshd[4617]: Accepted publickey for core from 139.178.89.65 port 47646 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:55.997024 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:56.011629 systemd-logind[1875]: New session 15 of user core.
Feb 13 15:40:56.016830 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:40:56.602516 sshd[4619]: Connection closed by 139.178.89.65 port 47646
Feb 13 15:40:56.604466 sshd-session[4617]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:56.610866 systemd[1]: sshd@14-172.31.17.42:22-139.178.89.65:47646.service: Deactivated successfully.
Feb 13 15:40:56.614504 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:40:56.615955 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:40:56.617466 systemd-logind[1875]: Removed session 15.
Feb 13 15:40:56.636908 systemd[1]: Started sshd@15-172.31.17.42:22-139.178.89.65:47660.service - OpenSSH per-connection server daemon (139.178.89.65:47660).
Feb 13 15:40:56.833741 sshd[4648]: Accepted publickey for core from 139.178.89.65 port 47660 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:56.834789 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:56.841323 systemd-logind[1875]: New session 16 of user core.
Feb 13 15:40:56.845465 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:40:58.968782 sshd[4650]: Connection closed by 139.178.89.65 port 47660
Feb 13 15:40:58.974532 sshd-session[4648]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:58.983851 systemd[1]: sshd@15-172.31.17.42:22-139.178.89.65:47660.service: Deactivated successfully.
Feb 13 15:40:58.993589 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:40:59.016419 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:40:59.028454 systemd[1]: Started sshd@16-172.31.17.42:22-139.178.89.65:47666.service - OpenSSH per-connection server daemon (139.178.89.65:47666).
Feb 13 15:40:59.033983 systemd-logind[1875]: Removed session 16.
Feb 13 15:40:59.206552 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 47666 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:59.208077 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:59.213619 systemd-logind[1875]: New session 17 of user core.
Feb 13 15:40:59.224400 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:40:59.603107 sshd[4668]: Connection closed by 139.178.89.65 port 47666
Feb 13 15:40:59.604769 sshd-session[4666]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:59.608821 systemd[1]: sshd@16-172.31.17.42:22-139.178.89.65:47666.service: Deactivated successfully.
Feb 13 15:40:59.610911 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:40:59.612508 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:40:59.615365 systemd-logind[1875]: Removed session 17.
Feb 13 15:40:59.639664 systemd[1]: Started sshd@17-172.31.17.42:22-139.178.89.65:47676.service - OpenSSH per-connection server daemon (139.178.89.65:47676).
Feb 13 15:40:59.834090 sshd[4676]: Accepted publickey for core from 139.178.89.65 port 47676 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:40:59.834892 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:59.858263 systemd-logind[1875]: New session 18 of user core.
Feb 13 15:40:59.865454 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:41:00.261661 sshd[4678]: Connection closed by 139.178.89.65 port 47676
Feb 13 15:41:00.265530 sshd-session[4676]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:00.271857 systemd[1]: sshd@17-172.31.17.42:22-139.178.89.65:47676.service: Deactivated successfully.
Feb 13 15:41:00.274595 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:41:00.276183 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:41:00.277916 systemd-logind[1875]: Removed session 18.
Feb 13 15:41:05.310496 systemd[1]: Started sshd@18-172.31.17.42:22-139.178.89.65:39530.service - OpenSSH per-connection server daemon (139.178.89.65:39530).
Feb 13 15:41:05.502951 sshd[4710]: Accepted publickey for core from 139.178.89.65 port 39530 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:41:05.504654 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:05.510960 systemd-logind[1875]: New session 19 of user core.
Feb 13 15:41:05.521444 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:41:05.755240 sshd[4712]: Connection closed by 139.178.89.65 port 39530
Feb 13 15:41:05.758766 sshd-session[4710]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:05.762915 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:41:05.765270 systemd[1]: sshd@18-172.31.17.42:22-139.178.89.65:39530.service: Deactivated successfully.
Feb 13 15:41:05.767885 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:41:05.769245 systemd-logind[1875]: Removed session 19.
Feb 13 15:41:10.803502 systemd[1]: Started sshd@19-172.31.17.42:22-139.178.89.65:39532.service - OpenSSH per-connection server daemon (139.178.89.65:39532).
Feb 13 15:41:10.993735 sshd[4749]: Accepted publickey for core from 139.178.89.65 port 39532 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:41:10.995334 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:11.004220 systemd-logind[1875]: New session 20 of user core.
Feb 13 15:41:11.011474 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:41:11.249897 sshd[4751]: Connection closed by 139.178.89.65 port 39532
Feb 13 15:41:11.254197 sshd-session[4749]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:11.258023 systemd[1]: sshd@19-172.31.17.42:22-139.178.89.65:39532.service: Deactivated successfully.
Feb 13 15:41:11.262038 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:41:11.264529 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:41:11.266008 systemd-logind[1875]: Removed session 20.
Feb 13 15:41:16.290594 systemd[1]: Started sshd@20-172.31.17.42:22-139.178.89.65:55706.service - OpenSSH per-connection server daemon (139.178.89.65:55706).
Feb 13 15:41:16.472762 sshd[4785]: Accepted publickey for core from 139.178.89.65 port 55706 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:41:16.474359 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:16.495259 systemd-logind[1875]: New session 21 of user core.
Feb 13 15:41:16.500396 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:41:16.758552 sshd[4793]: Connection closed by 139.178.89.65 port 55706
Feb 13 15:41:16.757386 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:16.765170 systemd[1]: sshd@20-172.31.17.42:22-139.178.89.65:55706.service: Deactivated successfully.
Feb 13 15:41:16.775453 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:41:16.798050 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:41:16.807731 systemd-logind[1875]: Removed session 21.
Feb 13 15:41:21.801785 systemd[1]: Started sshd@21-172.31.17.42:22-139.178.89.65:55718.service - OpenSSH per-connection server daemon (139.178.89.65:55718).
Feb 13 15:41:21.971176 sshd[4826]: Accepted publickey for core from 139.178.89.65 port 55718 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:41:21.972958 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:21.979013 systemd-logind[1875]: New session 22 of user core.
Feb 13 15:41:21.988605 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:41:22.275184 sshd[4843]: Connection closed by 139.178.89.65 port 55718
Feb 13 15:41:22.276432 sshd-session[4826]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:22.279916 systemd[1]: sshd@21-172.31.17.42:22-139.178.89.65:55718.service: Deactivated successfully.
Feb 13 15:41:22.282787 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:41:22.284768 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:41:22.286722 systemd-logind[1875]: Removed session 22.
Feb 13 15:41:36.817795 systemd[1]: cri-containerd-48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c.scope: Deactivated successfully.
Feb 13 15:41:36.819102 systemd[1]: cri-containerd-48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c.scope: Consumed 3.009s CPU time, 29.9M memory peak, 0B memory swap peak.
Feb 13 15:41:36.854447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c-rootfs.mount: Deactivated successfully.
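The session churn above follows a fixed systemd-logind pattern ("New session N of user core." paired with "Removed session N."), which makes it straightforward to measure how long each SSH session lived. A minimal parsing sketch, not part of the log itself; the function name and the hard-coded year (journald short timestamps omit it) are assumptions:

```python
import re
from datetime import datetime

# Matches the systemd-logind lifecycle lines seen in this log, e.g.
#   Feb 13 15:40:55.466219 systemd-logind[1875]: New session 14 of user core.
#   Feb 13 15:40:55.746128 systemd-logind[1875]: Removed session 14.
LINE_RE = re.compile(
    r"^(?P<ts>\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: "
    r"(?P<event>New session|Removed session) (?P<id>\d+)"
)

def session_durations(lines, year=2025):
    """Pair each 'New session N' with its 'Removed session N'; return seconds by id."""
    opened, durations = {}, {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        # The short journal timestamp carries no year, so one must be supplied.
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        if m["event"] == "New session":
            opened[m["id"]] = ts
        elif m["id"] in opened:
            durations[m["id"]] = (ts - opened.pop(m["id"])).total_seconds()
    return durations
```

Fed the session-14 lines above, this reports a lifetime of roughly 0.28 s, consistent with the rapid connect/disconnect cycle the log records.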
Feb 13 15:41:36.863957 containerd[1897]: time="2025-02-13T15:41:36.863572554Z" level=info msg="shim disconnected" id=48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c namespace=k8s.io
Feb 13 15:41:36.863957 containerd[1897]: time="2025-02-13T15:41:36.863955799Z" level=warning msg="cleaning up after shim disconnected" id=48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c namespace=k8s.io
Feb 13 15:41:36.865006 containerd[1897]: time="2025-02-13T15:41:36.863971149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:37.546616 kubelet[3418]: I0213 15:41:37.546582 3418 scope.go:117] "RemoveContainer" containerID="48808864acd9c65bdfe481373b0d0788e280ad124659a89df605cb05aa7b048c"
Feb 13 15:41:37.553011 containerd[1897]: time="2025-02-13T15:41:37.552966203Z" level=info msg="CreateContainer within sandbox \"87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:41:37.583092 containerd[1897]: time="2025-02-13T15:41:37.583045479Z" level=info msg="CreateContainer within sandbox \"87575473fec19ea79bec9556b5fd88208a880b5df691814c0c631a99dbe61f2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045\""
Feb 13 15:41:37.583652 containerd[1897]: time="2025-02-13T15:41:37.583608514Z" level=info msg="StartContainer for \"255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045\""
Feb 13 15:41:37.635481 systemd[1]: Started cri-containerd-255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045.scope - libcontainer container 255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045.
Feb 13 15:41:37.784444 containerd[1897]: time="2025-02-13T15:41:37.784222193Z" level=info msg="StartContainer for \"255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045\" returns successfully"
Feb 13 15:41:37.857253 systemd[1]: run-containerd-runc-k8s.io-255c98f514cbaa996371a29603fcb6e6692713a30a926d7c73a575f3ad27f045-runc.7Jo3MJ.mount: Deactivated successfully.
Feb 13 15:41:41.933833 systemd[1]: cri-containerd-043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104.scope: Deactivated successfully.
Feb 13 15:41:41.935019 systemd[1]: cri-containerd-043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104.scope: Consumed 1.534s CPU time, 16.5M memory peak, 0B memory swap peak.
Feb 13 15:41:41.992491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104-rootfs.mount: Deactivated successfully.
Feb 13 15:41:42.005809 containerd[1897]: time="2025-02-13T15:41:42.005710051Z" level=info msg="shim disconnected" id=043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104 namespace=k8s.io
Feb 13 15:41:42.006476 containerd[1897]: time="2025-02-13T15:41:42.005823212Z" level=warning msg="cleaning up after shim disconnected" id=043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104 namespace=k8s.io
Feb 13 15:41:42.006476 containerd[1897]: time="2025-02-13T15:41:42.005836892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:42.560029 kubelet[3418]: I0213 15:41:42.559999 3418 scope.go:117] "RemoveContainer" containerID="043b2ec4998622aafa1000dc96de75807d91431fd02a33afe4011851c1ad4104"
Feb 13 15:41:42.564748 containerd[1897]: time="2025-02-13T15:41:42.564707489Z" level=info msg="CreateContainer within sandbox \"2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:41:42.603065 containerd[1897]: time="2025-02-13T15:41:42.603019322Z" level=info msg="CreateContainer within sandbox \"2a94b3c90364a81cdfd356322870df628dce16520d5e22347cd8525c49937aa5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"44b710bc1b4827b5bad1a2040b3474d7c5420837e5e051ae6ce3aba5db30d7a1\""
Feb 13 15:41:42.603724 containerd[1897]: time="2025-02-13T15:41:42.603687563Z" level=info msg="StartContainer for \"44b710bc1b4827b5bad1a2040b3474d7c5420837e5e051ae6ce3aba5db30d7a1\""
Feb 13 15:41:42.705401 systemd[1]: Started cri-containerd-44b710bc1b4827b5bad1a2040b3474d7c5420837e5e051ae6ce3aba5db30d7a1.scope - libcontainer container 44b710bc1b4827b5bad1a2040b3474d7c5420837e5e051ae6ce3aba5db30d7a1.
Feb 13 15:41:42.766055 containerd[1897]: time="2025-02-13T15:41:42.765931119Z" level=info msg="StartContainer for \"44b710bc1b4827b5bad1a2040b3474d7c5420837e5e051ae6ce3aba5db30d7a1\" returns successfully"
Feb 13 15:41:43.457829 kubelet[3418]: E0213 15:41:43.457472 3418 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:41:53.459128 kubelet[3418]: E0213 15:41:53.458651 3418 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
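The two trailing kubelet errors show the node failing to renew its Lease in the kube-node-lease namespace against the API server at 172.31.17.42:6443; the `?timeout=10s` query parameter in the PUT URL is the request timeout that expires. When triaging repeated failures like these, it can help to pull the node name and timeout out of the message mechanically. A minimal sketch, not part of the log; the function name is an assumption and the regex only targets the exact message shape shown above:

```python
import re

# Targets the kubelet "Failed to update lease" errors seen above; the request
# URL embeds both the node name (the Lease object is named after the node)
# and the timeout query parameter the client sent.
LEASE_ERR_RE = re.compile(
    r'"Failed to update lease".*leases/(?P<node>[A-Za-z0-9.-]+)\?timeout=(?P<timeout>\w+)'
)

def parse_lease_failure(line):
    """Return (node, timeout) from a kubelet lease-renewal error line, or None."""
    m = LEASE_ERR_RE.search(line)
    return (m["node"], m["timeout"]) if m else None
```

Applied to either of the last two entries, this yields ("ip-172-31-17-42", "10s"). Renewals failing back to back like this generally mean the API server was unreachable or overloaded during the control-plane container restarts recorded just above, and if the lease goes stale long enough the node controller will mark the node NotReady.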