Jan 29 12:03:55.132960 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:03:55.133001 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:55.133017 kernel: BIOS-provided physical RAM map: Jan 29 12:03:55.133029 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:03:55.133041 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:03:55.133052 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:03:55.133070 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 29 12:03:55.133082 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 29 12:03:55.133095 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 29 12:03:55.133107 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:03:55.133119 kernel: NX (Execute Disable) protection: active Jan 29 12:03:55.133132 kernel: APIC: Static calls initialized Jan 29 12:03:55.133144 kernel: SMBIOS 2.7 present. Jan 29 12:03:55.133157 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 29 12:03:55.133175 kernel: Hypervisor detected: KVM Jan 29 12:03:55.133189 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:03:55.133203 kernel: kvm-clock: using sched offset of 6183787354 cycles Jan 29 12:03:55.133217 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:03:55.133232 kernel: tsc: Detected 2499.996 MHz processor Jan 29 12:03:55.133246 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:03:55.133271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:03:55.133289 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 29 12:03:55.133303 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:03:55.133317 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:03:55.133331 kernel: Using GB pages for direct mapping Jan 29 12:03:55.133345 kernel: ACPI: Early table checksum verification disabled Jan 29 12:03:55.133359 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 29 12:03:55.133373 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 29 12:03:55.133387 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 12:03:55.133401 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 29 12:03:55.133418 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 29 12:03:55.133432 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 12:03:55.133445 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 12:03:55.133459 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 29 12:03:55.133473 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 12:03:55.133487 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 29 12:03:55.133501 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 29 12:03:55.133515 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 12:03:55.133529 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 29 12:03:55.133546 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 29 12:03:55.133567 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 29 12:03:55.133582 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 29 12:03:55.133596 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 29 12:03:55.133611 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 29 12:03:55.133629 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 29 12:03:55.133644 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 29 12:03:55.133659 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 29 12:03:55.133673 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 29 12:03:55.133688 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 12:03:55.133703 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 12:03:55.133718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 29 12:03:55.133733 kernel: NUMA: Initialized distance table, cnt=1 Jan 29 12:03:55.133748 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 29 12:03:55.133766 kernel: Zone ranges: Jan 29 12:03:55.133780 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:03:55.133795 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 29 12:03:55.133810 kernel: Normal empty Jan 29 12:03:55.133825 kernel: Movable zone start for each node Jan 29 12:03:55.133840 kernel: Early memory node ranges Jan 29 12:03:55.133855 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:03:55.133870 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 29 12:03:55.133885 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 29 12:03:55.133899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:03:55.133917 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:03:55.133932 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 29 12:03:55.133947 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 12:03:55.133962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:03:55.133977 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 29 12:03:55.133992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:03:55.134007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:03:55.134022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:03:55.134037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:03:55.134054 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:03:55.134069 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 12:03:55.134084 kernel: TSC deadline timer available Jan 29 12:03:55.134099 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:03:55.134114 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:03:55.134129 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 29 12:03:55.134144 kernel: Booting paravirtualized kernel on KVM Jan 29 12:03:55.134159 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:03:55.134174 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:03:55.134192 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:03:55.134207 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:03:55.134222 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:03:55.134236 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:03:55.134251 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:03:55.138396 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:55.138449 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:03:55.138480 kernel: random: crng init done Jan 29 12:03:55.138519 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:03:55.138533 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 12:03:55.138547 kernel: Fallback order for Node 0: 0 Jan 29 12:03:55.138561 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 29 12:03:55.138577 kernel: Policy zone: DMA32 Jan 29 12:03:55.138592 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:03:55.138607 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved) Jan 29 12:03:55.138623 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:03:55.138638 kernel: Kernel/User page tables isolation: enabled Jan 29 12:03:55.138656 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:03:55.138670 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:03:55.138685 kernel: Dynamic Preempt: voluntary Jan 29 12:03:55.138699 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:03:55.138712 kernel: rcu: RCU event tracing is enabled. Jan 29 12:03:55.138726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:03:55.138741 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:03:55.138756 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:03:55.138769 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:03:55.138786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:03:55.138798 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:03:55.138810 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:03:55.138822 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 12:03:55.138848 kernel: Console: colour VGA+ 80x25 Jan 29 12:03:55.138867 kernel: printk: console [ttyS0] enabled Jan 29 12:03:55.138880 kernel: ACPI: Core revision 20230628 Jan 29 12:03:55.138893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 29 12:03:55.138907 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:03:55.138923 kernel: x2apic enabled Jan 29 12:03:55.138939 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:03:55.138967 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 12:03:55.138986 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 29 12:03:55.139003 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 12:03:55.139020 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 12:03:55.139036 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:03:55.139051 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:03:55.139067 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:03:55.139083 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:03:55.139100 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 29 12:03:55.139116 kernel: RETBleed: Vulnerable Jan 29 12:03:55.139135 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:03:55.139151 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:03:55.139167 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:03:55.139183 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 12:03:55.139199 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:03:55.139215 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:03:55.139231 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:03:55.139250 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 12:03:55.139289 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 12:03:55.139305 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 12:03:55.139321 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 12:03:55.139337 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 12:03:55.139354 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 29 12:03:55.139370 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:03:55.139385 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 12:03:55.139401 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 12:03:55.139417 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 29 12:03:55.139436 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 29 12:03:55.139452 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 29 12:03:55.139468 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 29 12:03:55.139484 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 29 12:03:55.139501 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:03:55.139515 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:03:55.139531 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:03:55.139547 kernel: landlock: Up and running. Jan 29 12:03:55.139563 kernel: SELinux: Initializing. Jan 29 12:03:55.139580 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 12:03:55.139596 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 12:03:55.139612 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 12:03:55.139631 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:55.139648 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:55.139664 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:55.139681 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 12:03:55.139697 kernel: signal: max sigframe size: 3632 Jan 29 12:03:55.139714 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:03:55.139731 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:03:55.139747 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 12:03:55.139763 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:03:55.139783 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:03:55.139800 kernel: .... node #0, CPUs: #1 Jan 29 12:03:55.139817 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 12:03:55.139835 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 12:03:55.139851 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:03:55.139868 kernel: smpboot: Max logical packages: 1 Jan 29 12:03:55.139884 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 29 12:03:55.139901 kernel: devtmpfs: initialized Jan 29 12:03:55.139921 kernel: x86/mm: Memory block size: 128MB Jan 29 12:03:55.139937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:03:55.139953 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:03:55.139970 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:03:55.139986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:03:55.140002 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:03:55.140018 kernel: audit: type=2000 audit(1738152233.803:1): state=initialized audit_enabled=0 res=1 Jan 29 12:03:55.140033 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:03:55.140049 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:03:55.140068 kernel: cpuidle: using governor menu Jan 29 12:03:55.140084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:03:55.140100 kernel: dca service started, version 1.12.1 Jan 29 12:03:55.140116 kernel: PCI: Using configuration type 1 for base access Jan 29 12:03:55.140132 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:03:55.140148 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:03:55.140164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:03:55.140180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:03:55.140196 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:03:55.140215 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:03:55.140231 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:03:55.140247 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:03:55.142479 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:03:55.142503 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 12:03:55.142519 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:03:55.142536 kernel: ACPI: Interpreter enabled Jan 29 12:03:55.142551 kernel: ACPI: PM: (supports S0 S5) Jan 29 12:03:55.142566 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:03:55.142589 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:03:55.142605 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:03:55.142622 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 12:03:55.142638 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:03:55.142864 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:03:55.143188 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:03:55.143352 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:03:55.143370 kernel: acpiphp: Slot [3] registered Jan 29 12:03:55.143391 kernel: acpiphp: Slot [4] registered Jan 29 12:03:55.143405 kernel: acpiphp: Slot [5] registered Jan 29 12:03:55.143418 kernel: acpiphp: Slot [6] registered Jan 29 12:03:55.143432 kernel: acpiphp: Slot [7] registered Jan 29 12:03:55.143446 kernel: acpiphp: Slot [8] registered Jan 29 12:03:55.143460 kernel: acpiphp: Slot [9] registered Jan 29 12:03:55.143472 kernel: acpiphp: Slot [10] registered Jan 29 12:03:55.143486 kernel: acpiphp: Slot [11] registered Jan 29 12:03:55.143499 kernel: acpiphp: Slot [12] registered Jan 29 12:03:55.143516 kernel: acpiphp: Slot [13] registered Jan 29 12:03:55.143529 kernel: acpiphp: Slot [14] registered Jan 29 12:03:55.143542 kernel: acpiphp: Slot [15] registered Jan 29 12:03:55.143555 kernel: acpiphp: Slot [16] registered Jan 29 12:03:55.143568 kernel: acpiphp: Slot [17] registered Jan 29 12:03:55.143582 kernel: acpiphp: Slot [18] registered Jan 29 12:03:55.143596 kernel: acpiphp: Slot [19] registered Jan 29 12:03:55.143609 kernel: acpiphp: Slot [20] registered Jan 29 12:03:55.143622 kernel: acpiphp: Slot [21] registered Jan 29 12:03:55.143638 kernel: acpiphp: Slot [22] registered Jan 29 12:03:55.143650 kernel: acpiphp: Slot [23] registered Jan 29 12:03:55.143663 kernel: acpiphp: Slot [24] registered Jan 29 12:03:55.143677 kernel: acpiphp: Slot [25] registered Jan 29 12:03:55.143690 kernel: acpiphp: Slot [26] registered Jan 29 12:03:55.143704 kernel: acpiphp: Slot [27] registered Jan 29 12:03:55.143718 kernel: acpiphp: Slot [28] registered Jan 29 12:03:55.143731 kernel: acpiphp: Slot [29] registered Jan 29 12:03:55.143745 kernel: acpiphp: Slot [30] registered Jan 29 12:03:55.143757 kernel: acpiphp: Slot [31] registered Jan 29 12:03:55.143773 kernel: PCI host bridge to bus 0000:00 
Jan 29 12:03:55.143902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:03:55.144018 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:03:55.144222 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:03:55.144400 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 12:03:55.144513 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:03:55.144661 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:03:55.144801 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 12:03:55.144948 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 29 12:03:55.145073 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 12:03:55.145197 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 29 12:03:55.145340 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 29 12:03:55.145461 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 29 12:03:55.145583 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 29 12:03:55.145713 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 29 12:03:55.145831 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 29 12:03:55.145954 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 29 12:03:55.146092 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs Jan 29 12:03:55.146226 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 29 12:03:55.146411 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 29 12:03:55.146542 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 12:03:55.146661 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:03:55.146789 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 12:03:55.146937 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 29 12:03:55.147069 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 12:03:55.147197 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 29 12:03:55.147215 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:03:55.147232 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:03:55.147246 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:03:55.147270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:03:55.147291 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:03:55.147305 kernel: iommu: Default domain type: Translated Jan 29 12:03:55.147319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:03:55.147333 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:03:55.147346 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:03:55.147360 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:03:55.147377 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 29 12:03:55.147501 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 29 12:03:55.147628 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 29 12:03:55.147754 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:03:55.147771 kernel: vgaarb: loaded Jan 29 12:03:55.147785 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 12:03:55.147799 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 29 12:03:55.147813 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:03:55.147831 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:03:55.147845 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:03:55.147859 kernel: pnp: PnP ACPI init Jan 29 12:03:55.147872 kernel: pnp: PnP ACPI: found 5 devices Jan 29 12:03:55.147886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:03:55.147898 kernel: NET: Registered PF_INET protocol family Jan 29 12:03:55.147912 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:03:55.147926 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 12:03:55.147940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:03:55.147956 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:03:55.147970 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 12:03:55.147982 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 12:03:55.147995 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 12:03:55.148009 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 12:03:55.148022 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:03:55.148037 kernel: NET: Registered PF_XDP protocol family Jan 29 12:03:55.148162 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:03:55.148326 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:03:55.148440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:03:55.148548 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 12:03:55.148672 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:03:55.148688 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:03:55.148701 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:03:55.148714 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 12:03:55.148727 kernel: clocksource: Switched to clocksource tsc Jan 29 12:03:55.148740 kernel: Initialise system trusted keyrings Jan 29 12:03:55.148756 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 12:03:55.148769 kernel: Key type asymmetric registered Jan 29 12:03:55.148782 kernel: Asymmetric key parser 'x509' registered Jan 29 12:03:55.148795 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:03:55.148808 kernel: io scheduler mq-deadline registered Jan 29 12:03:55.148821 kernel: io scheduler kyber registered Jan 29 12:03:55.148833 kernel: io scheduler bfq registered Jan 29 12:03:55.148846 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:03:55.148860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:03:55.148876 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:03:55.148890 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:03:55.148903 kernel: i8042: Warning: Keylock active Jan 29 12:03:55.148916 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:03:55.148928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:03:55.149062 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Jan 29 12:03:55.149176 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 12:03:55.149310 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T12:03:54 UTC (1738152234) Jan 29 12:03:55.149424 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 12:03:55.149441 kernel: intel_pstate: CPU model not supported Jan 29 12:03:55.149454 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:03:55.149466 kernel: Segment Routing with IPv6 Jan 29 12:03:55.149480 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:03:55.149493 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:03:55.149506 kernel: Key type dns_resolver registered Jan 29 12:03:55.149519 kernel: IPI shorthand broadcast: enabled Jan 29 12:03:55.149532 kernel: sched_clock: Marking stable (703002545, 344790147)->(1160198349, -112405657) Jan 29 12:03:55.149548 kernel: registered taskstats version 1 Jan 29 12:03:55.149560 kernel: Loading compiled-in X.509 certificates Jan 29 12:03:55.149573 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:03:55.149586 kernel: Key type .fscrypt registered Jan 29 12:03:55.149599 kernel: Key type fscrypt-provisioning registered Jan 29 12:03:55.149612 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:03:55.149626 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:03:55.149639 kernel: ima: No architecture policies found Jan 29 12:03:55.149654 kernel: clk: Disabling unused clocks Jan 29 12:03:55.149667 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:03:55.149679 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:03:55.149692 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:03:55.149706 kernel: Run /init as init process Jan 29 12:03:55.149718 kernel: with arguments: Jan 29 12:03:55.149731 kernel: /init Jan 29 12:03:55.149743 kernel: with environment: Jan 29 12:03:55.149755 kernel: HOME=/ Jan 29 12:03:55.149768 kernel: TERM=linux Jan 29 12:03:55.149783 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:03:55.149824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:55.149840 systemd[1]: Detected virtualization amazon. Jan 29 12:03:55.149854 systemd[1]: Detected architecture x86-64. Jan 29 12:03:55.149867 systemd[1]: Running in initrd. Jan 29 12:03:55.149880 systemd[1]: No hostname configured, using default hostname. Jan 29 12:03:55.149894 systemd[1]: Hostname set to . Jan 29 12:03:55.149912 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:03:55.149926 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:03:55.149941 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:55.149955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:55.149970 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:03:55.149985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 12:03:55.149999 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:03:55.150034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:03:55.150050 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:03:55.150065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:03:55.150080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:55.150095 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:55.150110 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:55.150124 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:03:55.150141 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:55.150155 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:55.150170 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:55.150184 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:55.150199 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:03:55.150213 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:03:55.150229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:55.150243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:55.150314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:55.150333 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:55.150348 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:03:55.150363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:55.150378 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:03:55.150393 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:03:55.150414 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:03:55.150429 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:55.150444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:03:55.150459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:55.150474 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:55.150489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:55.150505 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:03:55.150525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:03:55.150568 systemd-journald[178]: Collecting audit messages is disabled. Jan 29 12:03:55.150604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:03:55.150619 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 29 12:03:55.150634 kernel: Bridge firewalling registered Jan 29 12:03:55.150649 systemd-journald[178]: Journal started Jan 29 12:03:55.150681 systemd-journald[178]: Runtime Journal (/run/log/journal/ec29efe4d141eae066407749a94970fd) is 4.8M, max 38.6M, 33.7M free. Jan 29 12:03:55.097614 systemd-modules-load[179]: Inserted module 'overlay' Jan 29 12:03:55.268183 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:55.142061 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 29 12:03:55.266128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:55.269745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:55.279419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:55.282828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:55.290502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:03:55.306494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:55.345378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:55.355758 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:55.359381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:55.367730 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:03:55.369176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:55.380459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:55.401732 dracut-cmdline[210]: dracut-dracut-053 Jan 29 12:03:55.406432 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:55.438705 systemd-resolved[212]: Positive Trust Anchors: Jan 29 12:03:55.438726 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:55.438785 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:55.456070 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 29 12:03:55.459686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:55.463607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 29 12:03:55.521286 kernel: SCSI subsystem initialized Jan 29 12:03:55.532290 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:03:55.543289 kernel: iscsi: registered transport (tcp) Jan 29 12:03:55.567392 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:03:55.567468 kernel: QLogic iSCSI HBA Driver Jan 29 12:03:55.612889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:55.620474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:03:55.668696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:03:55.668783 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:03:55.668804 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:03:55.727310 kernel: raid6: avx512x4 gen() 7970 MB/s Jan 29 12:03:55.745313 kernel: raid6: avx512x2 gen() 13182 MB/s Jan 29 12:03:55.764315 kernel: raid6: avx512x1 gen() 9357 MB/s Jan 29 12:03:55.781415 kernel: raid6: avx2x4 gen() 5460 MB/s Jan 29 12:03:55.798308 kernel: raid6: avx2x2 gen() 15036 MB/s Jan 29 12:03:55.815294 kernel: raid6: avx2x1 gen() 11906 MB/s Jan 29 12:03:55.815368 kernel: raid6: using algorithm avx2x2 gen() 15036 MB/s Jan 29 12:03:55.833539 kernel: raid6: .... xor() 14878 MB/s, rmw enabled Jan 29 12:03:55.833607 kernel: raid6: using avx512x2 recovery algorithm Jan 29 12:03:55.878641 kernel: xor: automatically using best checksumming function avx Jan 29 12:03:56.151645 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:03:56.166046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:56.172524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:56.211213 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 29 12:03:56.221212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:56.240501 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:03:56.275087 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Jan 29 12:03:56.311764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:56.320460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:56.386245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:56.395454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:03:56.436626 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:56.442163 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:56.444093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:56.445683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:56.463538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:03:56.497714 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:03:56.508005 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 12:03:56.530671 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 12:03:56.531583 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Jan 29 12:03:56.531744 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:7c:6b:40:2a:3b Jan 29 12:03:56.531903 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:03:56.536792 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:03:56.560869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:56.562882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:56.574855 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:03:56.567728 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:56.569340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:56.569569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:56.571481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:56.584078 kernel: AES CTR mode by8 optimization enabled Jan 29 12:03:56.584382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:56.607288 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 12:03:56.611534 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 12:03:56.624315 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 12:03:56.639292 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:03:56.639436 kernel: GPT:9289727 != 16777215 Jan 29 12:03:56.639455 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:03:56.639472 kernel: GPT:9289727 != 16777215 Jan 29 12:03:56.639488 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:03:56.639507 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:03:56.733285 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (445) Jan 29 12:03:56.742356 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (450) Jan 29 12:03:56.814489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:56.827502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:56.913158 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 12:03:56.916772 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:56.941680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 12:03:56.941907 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 12:03:56.949840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 12:03:56.964793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 12:03:56.990748 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:03:57.012353 disk-uuid[630]: Primary Header is updated. Jan 29 12:03:57.012353 disk-uuid[630]: Secondary Entries is updated. Jan 29 12:03:57.012353 disk-uuid[630]: Secondary Header is updated. 
Jan 29 12:03:57.018285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:03:57.025285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:03:57.032309 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:03:58.036930 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:03:58.037063 disk-uuid[631]: The operation has completed successfully. Jan 29 12:03:58.252453 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:03:58.252579 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:03:58.273466 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:03:58.289176 sh[974]: Success Jan 29 12:03:58.303285 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:03:58.418572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:03:58.430480 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:03:58.432976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:03:58.486219 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:03:58.486302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:58.486324 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:03:58.488287 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:03:58.488340 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:03:58.529293 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:03:58.536461 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:03:58.537497 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:03:58.556578 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:03:58.561469 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:03:58.596489 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:58.596557 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:58.596580 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:03:58.602302 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:03:58.620710 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:03:58.622888 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:58.640921 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:03:58.655982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:03:58.771811 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:58.784525 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:58.862237 systemd-networkd[1167]: lo: Link UP Jan 29 12:03:58.862249 systemd-networkd[1167]: lo: Gained carrier Jan 29 12:03:58.885946 systemd-networkd[1167]: Enumeration completed Jan 29 12:03:58.886099 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 12:03:58.886488 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:58.886493 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:58.892571 systemd[1]: Reached target network.target - Network. Jan 29 12:03:58.898975 systemd-networkd[1167]: eth0: Link UP Jan 29 12:03:58.898982 systemd-networkd[1167]: eth0: Gained carrier Jan 29 12:03:58.899000 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:58.920527 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.21.48/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 12:03:58.927786 ignition[1107]: Ignition 2.19.0 Jan 29 12:03:58.927803 ignition[1107]: Stage: fetch-offline Jan 29 12:03:58.928077 ignition[1107]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:58.928091 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:58.929181 ignition[1107]: Ignition finished successfully Jan 29 12:03:58.936615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:58.945517 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:03:59.004299 ignition[1176]: Ignition 2.19.0 Jan 29 12:03:59.004315 ignition[1176]: Stage: fetch Jan 29 12:03:59.004682 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:59.004693 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:59.004948 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:59.061734 ignition[1176]: PUT result: OK Jan 29 12:03:59.079690 ignition[1176]: parsed url from cmdline: "" Jan 29 12:03:59.079701 ignition[1176]: no config URL provided Jan 29 12:03:59.079711 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:59.079733 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:59.079757 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:59.083112 ignition[1176]: PUT result: OK Jan 29 12:03:59.083163 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 12:03:59.086523 ignition[1176]: GET result: OK Jan 29 12:03:59.087192 ignition[1176]: parsing config with SHA512: 0da59010b56c59a1a8aa0f4a5e8bcc73bcf41bf6d2158539cb1b6b5b7c534c508c895bbee3c1a9a54c8bcef56068bfe6f421be54a62933fc9ea254af7f0b24e0 Jan 29 12:03:59.091576 unknown[1176]: fetched base config from "system" Jan 29 12:03:59.092064 ignition[1176]: fetch: fetch complete Jan 29 12:03:59.091590 unknown[1176]: fetched base config from "system" Jan 29 12:03:59.092073 ignition[1176]: fetch: fetch passed Jan 29 12:03:59.091596 unknown[1176]: fetched user config from "aws" Jan 29 12:03:59.092114 ignition[1176]: Ignition finished successfully Jan 29 12:03:59.095334 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:03:59.104474 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 12:03:59.119602 ignition[1182]: Ignition 2.19.0 Jan 29 12:03:59.119615 ignition[1182]: Stage: kargs Jan 29 12:03:59.120049 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:59.120062 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:59.120179 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:59.122595 ignition[1182]: PUT result: OK Jan 29 12:03:59.127792 ignition[1182]: kargs: kargs passed Jan 29 12:03:59.127855 ignition[1182]: Ignition finished successfully Jan 29 12:03:59.136426 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:03:59.142616 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:03:59.179792 ignition[1188]: Ignition 2.19.0 Jan 29 12:03:59.179806 ignition[1188]: Stage: disks Jan 29 12:03:59.180796 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:59.180810 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:59.180925 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:59.183447 ignition[1188]: PUT result: OK Jan 29 12:03:59.190194 ignition[1188]: disks: disks passed Jan 29 12:03:59.190286 ignition[1188]: Ignition finished successfully Jan 29 12:03:59.194444 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:03:59.194816 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:59.197931 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:03:59.202220 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:59.203514 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:59.207634 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:59.216656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:03:59.278507 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:03:59.288584 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:03:59.304964 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:03:59.465281 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:03:59.466479 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:03:59.468962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:03:59.477413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:03:59.482410 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:03:59.485608 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:03:59.485894 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:03:59.486141 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:59.505921 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 12:03:59.510474 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215) Jan 29 12:03:59.515227 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:59.515307 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:59.515329 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:03:59.517492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:03:59.533282 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:03:59.540736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:03:59.781395 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:03:59.793752 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:03:59.803079 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:03:59.810699 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:04:00.016696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:04:00.023496 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:04:00.033776 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:04:00.047735 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:04:00.049106 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:04:00.121077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:04:00.124951 ignition[1327]: INFO : Ignition 2.19.0 Jan 29 12:04:00.124951 ignition[1327]: INFO : Stage: mount Jan 29 12:04:00.124951 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:04:00.124951 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:04:00.124951 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:04:00.132326 ignition[1327]: INFO : PUT result: OK Jan 29 12:04:00.136003 ignition[1327]: INFO : mount: mount passed Jan 29 12:04:00.139178 ignition[1327]: INFO : Ignition finished successfully Jan 29 12:04:00.139068 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:04:00.145423 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:04:00.165999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:04:00.190296 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339) Jan 29 12:04:00.193402 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:04:00.193478 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:04:00.193498 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:04:00.199283 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:04:00.202109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:04:00.258709 ignition[1356]: INFO : Ignition 2.19.0 Jan 29 12:04:00.258709 ignition[1356]: INFO : Stage: files Jan 29 12:04:00.265030 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:04:00.265030 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:04:00.273100 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:04:00.277581 ignition[1356]: INFO : PUT result: OK Jan 29 12:04:00.286107 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:04:00.309087 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:04:00.309087 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:04:00.317273 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:04:00.319358 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:04:00.321420 unknown[1356]: wrote ssh authorized keys file for user: core Jan 29 12:04:00.322626 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:04:00.325476 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:04:00.327721 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:04:00.445625 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:04:00.586777 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:04:00.586777 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:04:00.592109 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 12:04:00.805435 systemd-networkd[1167]: eth0: Gained IPv6LL Jan 29 12:04:01.115399 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:04:01.653440 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 12:04:01.658189 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:04:01.662722 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:04:01.664968 ignition[1356]: INFO : files: files passed Jan 29 12:04:01.664968 ignition[1356]: INFO : Ignition finished successfully Jan 29 12:04:01.680348 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:04:01.689463 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:04:01.695965 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:04:01.710160 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:04:01.716899 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:04:01.736620 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:04:01.736620 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:04:01.742686 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:04:01.748814 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:04:01.756844 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 29 12:04:01.767798 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:04:01.834250 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:04:01.834412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:04:01.836840 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:04:01.839125 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:04:01.839240 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:04:01.849474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:04:01.863100 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:04:01.872484 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:04:01.907343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:04:01.907594 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:04:01.915117 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:04:01.916735 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:04:01.916868 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:04:01.921192 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:04:01.930338 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:04:01.937761 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:04:01.952032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:04:01.961998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:04:01.971635 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:04:01.976519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:04:01.980163 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:04:01.983150 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:04:01.984622 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:04:01.991377 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:04:01.992697 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:04:01.995195 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:04:01.996963 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:04:02.004834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:04:02.015379 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:04:02.016921 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:04:02.017104 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:04:02.022491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:04:02.022635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:04:02.025963 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:04:02.026127 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 29 12:04:02.036028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:04:02.040679 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:04:02.041228 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:04:02.054441 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:04:02.057377 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:04:02.061547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:04:02.065499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:04:02.065901 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:04:02.082998 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:04:02.083980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:04:02.111449 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:04:02.114700 ignition[1408]: INFO : Ignition 2.19.0 Jan 29 12:04:02.114700 ignition[1408]: INFO : Stage: umount Jan 29 12:04:02.114700 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:04:02.114700 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:04:02.124688 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:04:02.124688 ignition[1408]: INFO : PUT result: OK Jan 29 12:04:02.124688 ignition[1408]: INFO : umount: umount passed Jan 29 12:04:02.124688 ignition[1408]: INFO : Ignition finished successfully Jan 29 12:04:02.120733 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:04:02.120902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:04:02.127175 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:04:02.127831 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:04:02.133380 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:04:02.133561 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:04:02.136292 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:04:02.136368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:04:02.139344 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:04:02.139416 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:04:02.143150 systemd[1]: Stopped target network.target - Network. Jan 29 12:04:02.146596 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:04:02.146687 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:04:02.150000 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:04:02.152329 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:04:02.156170 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:04:02.159489 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:04:02.161759 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:04:02.164358 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:04:02.165503 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:04:02.167845 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:04:02.167902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 12:04:02.170404 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:04:02.174079 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:04:02.184911 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:04:02.185000 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:04:02.198536 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:04:02.198625 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:04:02.211571 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:04:02.213536 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:04:02.221335 systemd-networkd[1167]: eth0: DHCPv6 lease lost Jan 29 12:04:02.224800 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:04:02.225034 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:04:02.231674 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:04:02.232950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:04:02.239775 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:04:02.239849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:04:02.248400 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:04:02.250254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:04:02.251749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:04:02.255743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:04:02.255805 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:04:02.255882 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:04:02.255914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:04:02.255957 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:04:02.255988 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:04:02.256113 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:04:02.286739 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:04:02.288214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:04:02.294694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:04:02.294779 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:04:02.296328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:04:02.296373 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:04:02.303027 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:04:02.303203 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:04:02.304726 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:04:02.304777 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:04:02.306326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:04:02.306391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:04:02.317549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:04:02.318965 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:04:02.319034 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:04:02.328330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:04:02.328436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:04:02.335172 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:04:02.335324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:04:02.348124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:04:02.348289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:04:02.352559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:04:02.368203 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:04:02.389794 systemd[1]: Switching root. Jan 29 12:04:02.425155 systemd-journald[178]: Journal stopped Jan 29 12:04:04.561769 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 29 12:04:04.561862 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:04:04.561890 kernel: SELinux: policy capability open_perms=1 Jan 29 12:04:04.561908 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:04:04.561932 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:04:04.561949 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:04:04.561965 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:04:04.561983 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:04:04.562003 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:04:04.562020 kernel: audit: type=1403 audit(1738152242.922:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:04:04.562049 systemd[1]: Successfully loaded SELinux policy in 70.545ms. Jan 29 12:04:04.562079 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.186ms. Jan 29 12:04:04.562099 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:04:04.562118 systemd[1]: Detected virtualization amazon. Jan 29 12:04:04.562143 systemd[1]: Detected architecture x86-64. Jan 29 12:04:04.562160 systemd[1]: Detected first boot. Jan 29 12:04:04.562180 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:04:04.562199 zram_generator::config[1451]: No configuration found. Jan 29 12:04:04.562228 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:04:04.562251 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:04:04.562287 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:04:04.562308 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:04:04.562329 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:04:04.562348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jan 29 12:04:04.562371 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:04:04.562391 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:04:04.562412 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:04:04.562433 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:04:04.562458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:04:04.562478 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:04:04.562497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:04:04.562516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:04:04.562535 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:04:04.562553 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:04:04.562574 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:04:04.562595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:04:04.562621 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:04:04.562640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:04:04.562659 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:04:04.562678 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:04:04.562697 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:04:04.562714 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:04:04.562732 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:04:04.562750 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:04:04.562774 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:04:04.562793 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:04:04.562810 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:04:04.562828 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:04:04.562848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:04:04.562865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:04:04.562883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:04:04.562902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:04:04.562932 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:04:04.562957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:04:04.562976 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:04:04.562998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:04.563019 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:04:04.563043 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 29 12:04:04.563065 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:04:04.563084 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:04:04.563101 systemd[1]: Reached target machines.target - Containers. Jan 29 12:04:04.563120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:04:04.563141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:04:04.563158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:04:04.563176 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:04:04.563194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:04:04.563212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:04:04.563232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:04:04.563253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:04:04.565316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:04:04.565357 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:04:04.565381 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:04:04.565401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:04:04.565422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:04:04.565442 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:04:04.565463 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:04:04.565483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:04:04.565505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:04:04.565525 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:04:04.565551 kernel: loop: module loaded Jan 29 12:04:04.565574 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:04:04.565595 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:04:04.565619 systemd[1]: Stopped verity-setup.service. Jan 29 12:04:04.565717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:04.565740 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:04:04.565760 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:04:04.565782 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:04:04.565805 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:04:04.565833 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:04:04.565856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:04:04.565879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:04:04.565901 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 29 12:04:04.565924 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:04:04.565950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:04:04.565972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:04:04.565994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:04:04.566017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:04:04.566040 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:04:04.566062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:04:04.566085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:04:04.566112 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:04:04.566134 kernel: fuse: init (API version 7.39) Jan 29 12:04:04.566156 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:04:04.566178 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:04:04.566200 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:04:04.566222 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:04:04.566249 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:04:04.566309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:04:04.566332 kernel: ACPI: bus type drm_connector registered Jan 29 12:04:04.566354 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:04:04.566376 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:04:04.566439 systemd-journald[1530]: Collecting audit messages is disabled. Jan 29 12:04:04.566485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:04:04.566511 systemd-journald[1530]: Journal started Jan 29 12:04:04.566563 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec29efe4d141eae066407749a94970fd) is 4.8M, max 38.6M, 33.7M free. Jan 29 12:04:03.948155 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:04:03.979782 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 12:04:03.980186 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:04:04.573965 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:04:04.580196 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:04:04.593602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:04:04.601025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:04:04.614492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:04:04.635498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:04:04.635719 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 12:04:04.640732 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:04:04.643947 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:04:04.644229 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:04:04.646221 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:04:04.646429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:04:04.649140 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:04:04.651952 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:04:04.672375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:04:04.689411 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:04:04.718539 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 12:04:04.724415 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:04:04.754453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:04:04.771392 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:04:04.776748 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:04:04.789871 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:04:04.805586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:04:04.813730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:04:04.816845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:04:04.829863 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec29efe4d141eae066407749a94970fd is 123.326ms for 967 entries. Jan 29 12:04:04.829863 systemd-journald[1530]: System Journal (/var/log/journal/ec29efe4d141eae066407749a94970fd) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:04:04.969988 systemd-journald[1530]: Received client request to flush runtime journal. Jan 29 12:04:04.971692 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:04:04.971747 kernel: loop1: detected capacity change from 0 to 61336 Jan 29 12:04:04.882483 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:04:04.935179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:04:04.937538 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:04:04.969866 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:04:04.989885 kernel: loop2: detected capacity change from 0 to 140768 Jan 29 12:04:04.982559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:04:04.988777 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:04:05.051821 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Jan 29 12:04:05.052205 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Jan 29 12:04:05.063452 kernel: loop3: detected capacity change from 0 to 205544 Jan 29 12:04:05.066373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 12:04:05.176601 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 12:04:05.257300 kernel: loop5: detected capacity change from 0 to 61336 Jan 29 12:04:05.276374 kernel: loop6: detected capacity change from 0 to 140768 Jan 29 12:04:05.329391 kernel: loop7: detected capacity change from 0 to 205544 Jan 29 12:04:05.380189 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 12:04:05.381968 (sd-merge)[1604]: Merged extensions into '/usr'. Jan 29 12:04:05.405224 systemd[1]: Reloading requested from client PID 1558 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:04:05.405401 systemd[1]: Reloading... Jan 29 12:04:05.584385 zram_generator::config[1635]: No configuration found. Jan 29 12:04:05.866175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:05.987887 systemd[1]: Reloading finished in 581 ms. Jan 29 12:04:06.031948 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:04:06.046619 systemd[1]: Starting ensure-sysext.service... Jan 29 12:04:06.062551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:04:06.096524 systemd[1]: Reloading requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:04:06.096552 systemd[1]: Reloading... Jan 29 12:04:06.113585 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:04:06.117759 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:04:06.120017 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:04:06.123157 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jan 29 12:04:06.123280 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Jan 29 12:04:06.141173 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:04:06.141189 systemd-tmpfiles[1679]: Skipping /boot Jan 29 12:04:06.170413 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:04:06.170434 systemd-tmpfiles[1679]: Skipping /boot Jan 29 12:04:06.240363 zram_generator::config[1704]: No configuration found. Jan 29 12:04:06.250289 ldconfig[1551]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:04:06.467772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:06.539906 systemd[1]: Reloading finished in 442 ms. Jan 29 12:04:06.576384 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:04:06.583159 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:04:06.592600 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:04:06.613566 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:04:06.619521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 29 12:04:06.626461 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:04:06.636603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:04:06.643755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:04:06.651571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:04:06.670661 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:04:06.675760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.676119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:04:06.684615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:04:06.693039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:04:06.696187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:04:06.697514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:04:06.697780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.702005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.703339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:04:06.703599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:04:06.703751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.711128 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.711609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:04:06.720304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:04:06.722318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:04:06.722615 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:04:06.724498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:04:06.733817 systemd[1]: Finished ensure-sysext.service. Jan 29 12:04:06.757325 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:04:06.757769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:04:06.759757 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:04:06.761498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:04:06.774720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:04:06.774957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 12:04:06.777399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:04:06.782638 systemd-udevd[1769]: Using default interface naming scheme 'v255'. Jan 29 12:04:06.782840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:04:06.783064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:04:06.785611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:04:06.789741 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:04:06.792081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:04:06.808639 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:04:06.830780 augenrules[1794]: No rules Jan 29 12:04:06.829065 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:04:06.832368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:04:06.849993 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:04:06.869482 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:04:06.871913 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:04:06.881073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:04:06.893182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:04:07.005921 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:04:07.029362 systemd-resolved[1767]: Positive Trust Anchors: Jan 29 12:04:07.031773 systemd-resolved[1767]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:04:07.031843 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:04:07.055385 (udev-worker)[1812]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:07.072686 systemd-resolved[1767]: Defaulting to hostname 'linux'. Jan 29 12:04:07.077375 systemd-networkd[1808]: lo: Link UP Jan 29 12:04:07.077387 systemd-networkd[1808]: lo: Gained carrier Jan 29 12:04:07.080652 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:04:07.082470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:04:07.084403 systemd-networkd[1808]: Enumeration completed Jan 29 12:04:07.084526 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:04:07.086096 systemd[1]: Reached target network.target - Network. 
Jan 29 12:04:07.090099 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:04:07.090112 systemd-networkd[1808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:04:07.095497 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:04:07.097677 systemd-networkd[1808]: eth0: Link UP Jan 29 12:04:07.098130 systemd-networkd[1808]: eth0: Gained carrier Jan 29 12:04:07.098158 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:04:07.115067 systemd-networkd[1808]: eth0: DHCPv4 address 172.31.21.48/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 12:04:07.115304 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1819) Jan 29 12:04:07.188649 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:04:07.216348 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:04:07.234670 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:04:07.234756 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 12:04:07.238184 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 12:04:07.250297 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 12:04:07.278297 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 12:04:07.336323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 12:04:07.354680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:04:07.423706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:04:07.434292 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:04:07.462519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:04:07.469787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:04:07.486520 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:04:07.531222 lvm[1923]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:04:07.571875 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:04:07.573065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:04:07.583539 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:04:07.603297 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:04:07.638579 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:04:07.776790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:04:07.782146 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:04:07.785815 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 29 12:04:07.787511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:04:07.789336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:04:07.790674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:04:07.792237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:04:07.794307 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:04:07.794440 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:04:07.795723 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:04:07.797998 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:04:07.801068 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:04:07.807779 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:04:07.809779 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:04:07.811428 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:04:07.813286 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:04:07.814429 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:04:07.814469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:04:07.819406 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:04:07.824460 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:04:07.833552 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:04:07.840491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:04:07.852668 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:04:07.854093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:04:07.864479 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:04:07.869626 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 12:04:07.877547 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:04:07.884598 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 12:04:07.896552 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:04:07.912718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:04:07.929566 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:04:07.933052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:04:07.933799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:04:07.943877 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:04:07.962921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 29 12:04:07.976818 jq[1937]: false Jan 29 12:04:07.978133 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:04:07.979050 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:04:07.983165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:04:07.983998 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:04:08.000902 dbus-daemon[1936]: [system] SELinux support is enabled Jan 29 12:04:08.029389 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:04:08.033764 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1808 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 12:04:08.052464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:04:08.052523 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:04:08.054553 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:04:08.054592 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:04:08.068890 (ntainerd)[1959]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:04:08.086740 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 12:04:08.093679 jq[1950]: true Jan 29 12:04:08.119849 update_engine[1948]: I20250129 12:04:08.116312 1948 main.cc:92] Flatcar Update Engine starting Jan 29 12:04:08.131859 extend-filesystems[1938]: Found loop4 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found loop5 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found loop6 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found loop7 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p1 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p2 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p3 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found usr Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p4 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p6 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p7 Jan 29 12:04:08.131859 extend-filesystems[1938]: Found nvme0n1p9 Jan 29 12:04:08.131859 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9 Jan 29 12:04:08.127554 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 12:04:08.176715 tar[1965]: linux-amd64/helm Jan 29 12:04:08.188845 update_engine[1948]: I20250129 12:04:08.151212 1948 update_check_scheduler.cc:74] Next update check in 5m22s Jan 29 12:04:08.134600 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:04:08.135086 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:04:08.189021 jq[1970]: true Jan 29 12:04:08.150952 systemd[1]: Started update-engine.service - Update Engine. 
Jan 29 12:04:08.179909 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:04:08.220058 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: ---------------------------------------------------- Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: corporation. Support and training for ntp-4 are Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: available at https://www.nwtime.org/support Jan 29 12:04:08.221464 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: ---------------------------------------------------- Jan 29 12:04:08.220089 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:04:08.220097 ntpd[1940]: ---------------------------------------------------- Jan 29 12:04:08.220104 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:04:08.220111 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:04:08.220118 ntpd[1940]: corporation. Support and training for ntp-4 are Jan 29 12:04:08.220125 ntpd[1940]: available at https://www.nwtime.org/support Jan 29 12:04:08.220132 ntpd[1940]: ---------------------------------------------------- Jan 29 12:04:08.231648 ntpd[1940]: proto: precision = 0.056 usec (-24) Jan 29 12:04:08.234337 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: proto: precision = 0.056 usec (-24) Jan 29 12:04:08.234337 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: basedate set to 2025-01-17 Jan 29 12:04:08.234337 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: gps base set to 2025-01-19 (week 2350) Jan 29 12:04:08.231921 ntpd[1940]: basedate set to 2025-01-17 Jan 29 12:04:08.231931 ntpd[1940]: gps base set to 2025-01-19 (week 2350) Jan 29 12:04:08.238557 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listen normally on 3 eth0 172.31.21.48:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: bind(21) AF_INET6 fe80::47c:6bff:fe40:2a3b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: unable to create socket on eth0 (5) for fe80::47c:6bff:fe40:2a3b%2#123 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: failed to init interface for address fe80::47c:6bff:fe40:2a3b%2 Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:04:08.242398 ntpd[1940]: 29 Jan 12:04:08 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized Jan 29 12:04:08.238612 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:04:08.238765 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:04:08.238792 ntpd[1940]: Listen normally on 3 eth0 172.31.21.48:123 Jan 29 12:04:08.238827 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jan 29 12:04:08.238858 ntpd[1940]: bind(21) AF_INET6 fe80::47c:6bff:fe40:2a3b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:04:08.238874 ntpd[1940]: unable to create socket on eth0 (5) for fe80::47c:6bff:fe40:2a3b%2#123 Jan 29 12:04:08.238885 ntpd[1940]: failed to init interface for address fe80::47c:6bff:fe40:2a3b%2 Jan 29 12:04:08.238925 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jan 29 12:04:08.240134 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:04:08.240158 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:04:08.266077 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9 Jan 29 12:04:08.279678 coreos-metadata[1935]: Jan 29 12:04:08.279 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:04:08.289029 coreos-metadata[1935]: Jan 29 12:04:08.288 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 12:04:08.290558 extend-filesystems[1994]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:04:08.302510 coreos-metadata[1935]: Jan 29 12:04:08.299 INFO Fetch successful Jan 29 12:04:08.302510 coreos-metadata[1935]: Jan 29 12:04:08.299 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 12:04:08.295342 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 12:04:08.309510 coreos-metadata[1935]: Jan 29 12:04:08.308 INFO Fetch successful Jan 29 12:04:08.309510 coreos-metadata[1935]: Jan 29 12:04:08.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 12:04:08.310324 coreos-metadata[1935]: Jan 29 12:04:08.309 INFO Fetch successful Jan 29 12:04:08.310324 coreos-metadata[1935]: Jan 29 12:04:08.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 12:04:08.314333 coreos-metadata[1935]: Jan 29 12:04:08.312 INFO Fetch successful Jan 29 12:04:08.314333 coreos-metadata[1935]: Jan 29 12:04:08.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 12:04:08.314719 coreos-metadata[1935]: Jan 29 12:04:08.314 INFO Fetch failed with 404: resource not found Jan 29 12:04:08.314719 coreos-metadata[1935]: Jan 29 12:04:08.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 12:04:08.315555 coreos-metadata[1935]: Jan 29 12:04:08.315 INFO Fetch successful Jan 29 12:04:08.315555 coreos-metadata[1935]: Jan 29 12:04:08.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 12:04:08.316033 systemd-logind[1947]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:04:08.317291 coreos-metadata[1935]: Jan 29 12:04:08.316 INFO Fetch successful Jan 29 12:04:08.317291 coreos-metadata[1935]: Jan 29 12:04:08.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 12:04:08.316908 systemd-logind[1947]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 12:04:08.316935 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:04:08.317722 systemd-logind[1947]: New seat seat0. 
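The coreos-metadata entries above follow the usual EC2 instance-metadata pattern: a PUT to the token endpoint, then GETs against versioned meta-data paths, with a 404 reported as "resource not found" (the ipv6 path). A minimal sketch of that flow, assuming the standard IMDSv2 token headers (the header names are not shown in the log) and reusing the 2021-01-03 paths it fetches:

```python
import urllib.request
from urllib.error import HTTPError

IMDS = "http://169.254.169.254"

# Step 1: request a session token (the "Putting .../latest/api/token" entry above).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def fetch(path: str) -> str:
    """GET a versioned meta-data path with the session token attached."""
    req = urllib.request.Request(f"{IMDS}/{path}",
                                 headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

print(fetch("2021-01-03/meta-data/instance-id"))
try:
    print(fetch("2021-01-03/meta-data/ipv6"))
except HTTPError as err:          # the log records this path failing with 404
    print("ipv6:", err.code, "resource not found")
```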
Jan 29 12:04:08.319181 coreos-metadata[1935]: Jan 29 12:04:08.317 INFO Fetch successful Jan 29 12:04:08.319181 coreos-metadata[1935]: Jan 29 12:04:08.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 12:04:08.320329 coreos-metadata[1935]: Jan 29 12:04:08.319 INFO Fetch successful Jan 29 12:04:08.320329 coreos-metadata[1935]: Jan 29 12:04:08.319 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 12:04:08.320231 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:04:08.325489 coreos-metadata[1935]: Jan 29 12:04:08.323 INFO Fetch successful Jan 29 12:04:08.338292 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 12:04:08.419811 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:04:08.422625 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:04:08.485737 systemd-networkd[1808]: eth0: Gained IPv6LL Jan 29 12:04:08.499085 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:04:08.501255 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:04:08.503288 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 12:04:08.523498 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 12:04:08.533969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:08.544749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:04:08.553536 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1819) Jan 29 12:04:08.553656 extend-filesystems[1994]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 12:04:08.553656 extend-filesystems[1994]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:04:08.553656 extend-filesystems[1994]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 12:04:08.556196 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:04:08.559439 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9 Jan 29 12:04:08.557374 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:04:08.572403 bash[2007]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:04:08.574065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:04:08.617902 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:04:08.621126 systemd[1]: Starting sshkeys.service... Jan 29 12:04:08.622407 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 12:04:08.632105 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1972 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:04:08.648249 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 12:04:08.704979 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:04:08.710092 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:04:08.721929 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
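For scale, the resize messages above mean the root filesystem on /dev/nvme0n1p9 grows from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. A quick check of that arithmetic, with the block counts copied from the log:

```python
BLOCK = 4096                                   # "(4k) blocks" per the EXT4/resize2fs messages
old_blocks, new_blocks = 553_472, 1_489_915    # figures from the log

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB -> ~5.68 GiB
```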
Jan 29 12:04:08.731216 polkitd[2043]: Started polkitd version 121 Jan 29 12:04:08.758085 polkitd[2043]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:04:08.762538 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:04:08.758177 polkitd[2043]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:04:08.761687 polkitd[2043]: Finished loading, compiling and executing 2 rules Jan 29 12:04:08.762329 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:04:08.762976 polkitd[2043]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:04:08.786299 amazon-ssm-agent[2019]: Initializing new seelog logger Jan 29 12:04:08.786299 amazon-ssm-agent[2019]: New Seelog Logger Creation Complete Jan 29 12:04:08.786299 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.786299 amazon-ssm-agent[2019]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.786299 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 processing appconfig overrides Jan 29 12:04:08.787135 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.787226 amazon-ssm-agent[2019]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.787401 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 processing appconfig overrides Jan 29 12:04:08.787816 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.788388 amazon-ssm-agent[2019]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.788388 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 processing appconfig overrides Jan 29 12:04:08.790966 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO Proxy environment variables: Jan 29 12:04:08.792359 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.792465 amazon-ssm-agent[2019]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:08.794108 amazon-ssm-agent[2019]: 2025/01/29 12:04:08 processing appconfig overrides Jan 29 12:04:08.845336 systemd-resolved[1767]: System hostname changed to 'ip-172-31-21-48'. 
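The amazon-ssm-agent entries above repeat "Found config file at /etc/amazon/ssm/amazon-ssm-agent.json ... processing appconfig overrides", i.e. a JSON override file layered on top of built-in defaults. A rough sketch of that layering pattern (the defaults dict and its keys here are purely illustrative, not the agent's real schema):

```python
import json
from copy import deepcopy

def apply_override(defaults: dict, override_path: str) -> dict:
    """Recursively overlay the keys found in a JSON file onto a defaults dict."""
    def merge(base: dict, extra: dict) -> dict:
        for key, value in extra.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                merge(base[key], value)
            else:
                base[key] = value
        return base

    with open(override_path) as f:
        return merge(deepcopy(defaults), json.load(f))

# Path from the log; "Agent"/"Region" are hypothetical keys for this example only.
config = apply_override({"Agent": {"Region": ""}}, "/etc/amazon/ssm/amazon-ssm-agent.json")
print(config)
```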
Jan 29 12:04:08.845405 systemd-hostnamed[1972]: Hostname set to (transient) Jan 29 12:04:08.890335 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO no_proxy: Jan 29 12:04:08.994194 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO https_proxy: Jan 29 12:04:09.025456 coreos-metadata[2071]: Jan 29 12:04:09.025 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:04:09.025998 locksmithd[1978]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:04:09.034706 coreos-metadata[2071]: Jan 29 12:04:09.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 12:04:09.036531 coreos-metadata[2071]: Jan 29 12:04:09.036 INFO Fetch successful Jan 29 12:04:09.036531 coreos-metadata[2071]: Jan 29 12:04:09.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:04:09.046335 coreos-metadata[2071]: Jan 29 12:04:09.041 INFO Fetch successful Jan 29 12:04:09.048907 unknown[2071]: wrote ssh authorized keys file for user: core Jan 29 12:04:09.095399 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO http_proxy: Jan 29 12:04:09.102393 containerd[1959]: time="2025-01-29T12:04:09.102299626Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:04:09.127094 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:04:09.130378 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:04:09.142676 systemd[1]: Finished sshkeys.service. Jan 29 12:04:09.197689 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO Checking if agent identity type OnPrem can be assumed Jan 29 12:04:09.294708 amazon-ssm-agent[2019]: 2025-01-29 12:04:08 INFO Checking if agent identity type EC2 can be assumed Jan 29 12:04:09.338756 containerd[1959]: time="2025-01-29T12:04:09.338312365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348592097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348645040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348673276Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348868240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348891950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348965387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.348984644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.349198984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.349223577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.349243637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349320 containerd[1959]: time="2025-01-29T12:04:09.349277542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349790 containerd[1959]: time="2025-01-29T12:04:09.349391535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349790 containerd[1959]: time="2025-01-29T12:04:09.349641513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349882 containerd[1959]: time="2025-01-29T12:04:09.349799354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:09.349882 containerd[1959]: time="2025-01-29T12:04:09.349823362Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:04:09.349963 containerd[1959]: time="2025-01-29T12:04:09.349940372Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:04:09.351165 containerd[1959]: time="2025-01-29T12:04:09.350000435Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.362569665Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.362667105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.362692855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.362772445Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.362804402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363001850Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363500870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363696608Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363723054Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363745981Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363770012Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363792197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.363807 containerd[1959]: time="2025-01-29T12:04:09.363810749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363833056Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363852228Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363868871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363884788Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363901448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363929566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363950705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.363970132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364001374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364020083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364040906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364087306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364107805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 12:04:09.364371 containerd[1959]: time="2025-01-29T12:04:09.364128985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364885 containerd[1959]: time="2025-01-29T12:04:09.364153356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364885 containerd[1959]: time="2025-01-29T12:04:09.364172498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364885 containerd[1959]: time="2025-01-29T12:04:09.364193442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364885 containerd[1959]: time="2025-01-29T12:04:09.364213687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.364885 containerd[1959]: time="2025-01-29T12:04:09.364244945Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:04:09.370585 containerd[1959]: time="2025-01-29T12:04:09.370368884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.370585 containerd[1959]: time="2025-01-29T12:04:09.370421408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.370585 containerd[1959]: time="2025-01-29T12:04:09.370442394Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:04:09.370585 containerd[1959]: time="2025-01-29T12:04:09.370533444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370562705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370654798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370675955Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370690427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370710555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370732227Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:04:09.370814 containerd[1959]: time="2025-01-29T12:04:09.370748692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:04:09.372926 containerd[1959]: time="2025-01-29T12:04:09.371181193Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:04:09.372926 containerd[1959]: time="2025-01-29T12:04:09.371315143Z" level=info msg="Connect containerd service" Jan 29 12:04:09.372926 containerd[1959]: time="2025-01-29T12:04:09.371367056Z" level=info msg="using legacy CRI server" Jan 29 12:04:09.372926 containerd[1959]: time="2025-01-29T12:04:09.371377968Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:04:09.372926 containerd[1959]: time="2025-01-29T12:04:09.371522787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:04:09.378440 containerd[1959]: time="2025-01-29T12:04:09.378074881Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:04:09.383968 
containerd[1959]: time="2025-01-29T12:04:09.381732510Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.381807817Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.381863969Z" level=info msg="Start subscribing containerd event" Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.381917716Z" level=info msg="Start recovering state" Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.382000786Z" level=info msg="Start event monitor" Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.382017632Z" level=info msg="Start snapshots syncer" Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.382033556Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:04:09.383968 containerd[1959]: time="2025-01-29T12:04:09.382045860Z" level=info msg="Start streaming server" Jan 29 12:04:09.382227 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:04:09.394379 containerd[1959]: time="2025-01-29T12:04:09.394332111Z" level=info msg="containerd successfully booted in 0.293123s" Jan 29 12:04:09.398353 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO Agent will take identity from EC2 Jan 29 12:04:09.499211 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:09.597154 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:09.700277 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:09.798716 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 12:04:09.815313 sshd_keygen[1967]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:04:09.869441 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:04:09.875669 tar[1965]: linux-amd64/LICENSE Jan 29 12:04:09.878564 tar[1965]: linux-amd64/README.md Jan 29 12:04:09.878984 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:04:09.898560 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 12:04:09.900620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:04:09.902815 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:04:09.903051 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:04:09.915802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [Registrar] Starting registrar module Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [EC2Identity] EC2 registration was successful. 
Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [CredentialRefresher] credentialRefresher has started Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 12:04:09.920310 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 12:04:09.938532 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:04:09.951310 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:04:09.954468 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:04:09.956245 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:04:09.998068 amazon-ssm-agent[2019]: 2025-01-29 12:04:09 INFO [CredentialRefresher] Next credential rotation will be in 30.241660691216666 minutes Jan 29 12:04:10.941799 amazon-ssm-agent[2019]: 2025-01-29 12:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 12:04:11.042067 amazon-ssm-agent[2019]: 2025-01-29 12:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2180) started Jan 29 12:04:11.103521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:11.106319 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:04:11.107974 systemd[1]: Startup finished in 865ms (kernel) + 8.166s (initrd) + 8.254s (userspace) = 17.286s. Jan 29 12:04:11.113018 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:11.156963 amazon-ssm-agent[2019]: 2025-01-29 12:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 12:04:11.245846 ntpd[1940]: Listen normally on 6 eth0 [fe80::47c:6bff:fe40:2a3b%2]:123 Jan 29 12:04:11.246749 ntpd[1940]: 29 Jan 12:04:11 ntpd[1940]: Listen normally on 6 eth0 [fe80::47c:6bff:fe40:2a3b%2]:123 Jan 29 12:04:12.415559 kubelet[2192]: E0129 12:04:12.415509 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:12.418252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:12.418597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:15.592722 systemd-resolved[1767]: Clock change detected. Flushing caches. Jan 29 12:04:17.701167 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:04:17.708664 systemd[1]: Started sshd@0-172.31.21.48:22-139.178.68.195:34754.service - OpenSSH per-connection server daemon (139.178.68.195:34754). Jan 29 12:04:17.886729 sshd[2208]: Accepted publickey for core from 139.178.68.195 port 34754 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:17.888834 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:17.898010 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:04:17.903672 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
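The earlier ntpd errors ("bind(21) AF_INET6 fe80::... failed: Cannot assign requested address") resolve in the ntpd entry above: once eth0 gained its IPv6 link-local address ("Gained IPv6LL"), ntpd adds "Listen normally on 6 eth0 [fe80::...]:123". A small Linux-only sketch of how one might poll for that precondition by reading /proc/net/if_inet6 (interface name taken from the log):

```python
def has_linklocal(ifname: str = "eth0") -> bool:
    # Each line of /proc/net/if_inet6 is:
    #   <32 hex chars> <ifindex> <prefixlen> <scope> <flags> <ifname>
    with open("/proc/net/if_inet6") as f:
        for line in f:
            addr, *_middle, name = line.split()
            if name == ifname and addr.startswith("fe80"):
                return True
    return False

print("eth0 has a link-local IPv6 address:", has_linklocal("eth0"))
```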
Jan 29 12:04:17.907231 systemd-logind[1947]: New session 1 of user core. Jan 29 12:04:17.922439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:04:17.929077 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:04:17.944102 (systemd)[2212]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:04:18.065140 systemd[2212]: Queued start job for default target default.target. Jan 29 12:04:18.072532 systemd[2212]: Created slice app.slice - User Application Slice. Jan 29 12:04:18.072573 systemd[2212]: Reached target paths.target - Paths. Jan 29 12:04:18.072594 systemd[2212]: Reached target timers.target - Timers. Jan 29 12:04:18.074019 systemd[2212]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:04:18.086852 systemd[2212]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:04:18.087193 systemd[2212]: Reached target sockets.target - Sockets. Jan 29 12:04:18.087222 systemd[2212]: Reached target basic.target - Basic System. Jan 29 12:04:18.087274 systemd[2212]: Reached target default.target - Main User Target. Jan 29 12:04:18.087330 systemd[2212]: Startup finished in 136ms. Jan 29 12:04:18.087452 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:04:18.088837 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:04:18.241433 systemd[1]: Started sshd@1-172.31.21.48:22-139.178.68.195:34768.service - OpenSSH per-connection server daemon (139.178.68.195:34768). Jan 29 12:04:18.431293 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 34768 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:18.432860 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:18.450712 systemd-logind[1947]: New session 2 of user core. Jan 29 12:04:18.457566 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:04:18.586853 sshd[2223]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:18.593438 systemd[1]: sshd@1-172.31.21.48:22-139.178.68.195:34768.service: Deactivated successfully. Jan 29 12:04:18.597630 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:04:18.601118 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:04:18.622724 systemd-logind[1947]: Removed session 2. Jan 29 12:04:18.628688 systemd[1]: Started sshd@2-172.31.21.48:22-139.178.68.195:34774.service - OpenSSH per-connection server daemon (139.178.68.195:34774). Jan 29 12:04:18.796854 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 34774 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:18.798824 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:18.804291 systemd-logind[1947]: New session 3 of user core. Jan 29 12:04:18.820533 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:04:18.935749 sshd[2230]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:18.939021 systemd[1]: sshd@2-172.31.21.48:22-139.178.68.195:34774.service: Deactivated successfully. Jan 29 12:04:18.940949 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:04:18.942523 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:04:18.943867 systemd-logind[1947]: Removed session 3. 
Jan 29 12:04:18.977705 systemd[1]: Started sshd@3-172.31.21.48:22-139.178.68.195:34788.service - OpenSSH per-connection server daemon (139.178.68.195:34788). Jan 29 12:04:19.133610 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 34788 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:19.135426 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:19.141240 systemd-logind[1947]: New session 4 of user core. Jan 29 12:04:19.150568 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:04:19.272366 sshd[2237]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:19.277734 systemd[1]: sshd@3-172.31.21.48:22-139.178.68.195:34788.service: Deactivated successfully. Jan 29 12:04:19.282632 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:04:19.287985 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:04:19.304864 systemd-logind[1947]: Removed session 4. Jan 29 12:04:19.316053 systemd[1]: Started sshd@4-172.31.21.48:22-139.178.68.195:34800.service - OpenSSH per-connection server daemon (139.178.68.195:34800). Jan 29 12:04:19.496632 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 34800 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:19.498662 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:19.505119 systemd-logind[1947]: New session 5 of user core. Jan 29 12:04:19.510531 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:04:19.650520 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:04:19.652444 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:20.143659 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:04:20.144734 (dockerd)[2262]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:04:20.767374 dockerd[2262]: time="2025-01-29T12:04:20.767293415Z" level=info msg="Starting up" Jan 29 12:04:20.907376 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport502992364-merged.mount: Deactivated successfully. Jan 29 12:04:20.940742 dockerd[2262]: time="2025-01-29T12:04:20.940690881Z" level=info msg="Loading containers: start." Jan 29 12:04:21.094378 kernel: Initializing XFRM netlink socket Jan 29 12:04:21.126490 (udev-worker)[2284]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:21.204553 systemd-networkd[1808]: docker0: Link UP Jan 29 12:04:21.228826 dockerd[2262]: time="2025-01-29T12:04:21.228776136Z" level=info msg="Loading containers: done." 
Jan 29 12:04:21.252140 dockerd[2262]: time="2025-01-29T12:04:21.252045249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:04:21.252465 dockerd[2262]: time="2025-01-29T12:04:21.252215547Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:04:21.252465 dockerd[2262]: time="2025-01-29T12:04:21.252454882Z" level=info msg="Daemon has completed initialization" Jan 29 12:04:21.288529 dockerd[2262]: time="2025-01-29T12:04:21.288202053Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:04:21.288413 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:04:22.507214 containerd[1959]: time="2025-01-29T12:04:22.507170729Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 12:04:23.041014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:04:23.047775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:23.217199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702623176.mount: Deactivated successfully. Jan 29 12:04:23.294718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:23.305814 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:23.380447 kubelet[2417]: E0129 12:04:23.380398 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:23.388083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:23.388268 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
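The dockerd entries above end with "API listen on /run/docker.sock", after which the Engine API is reachable over that Unix socket. A standard-library sketch of pinging it (GET /_ping should return 200/OK; assumes permission to read the socket):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a Unix domain socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")   # socket path from the log
conn.request("GET", "/_ping")
resp = conn.getresponse()
print(resp.status, resp.read().decode())        # expect: 200 OK
```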
Jan 29 12:04:25.775958 containerd[1959]: time="2025-01-29T12:04:25.775908403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:25.777177 containerd[1959]: time="2025-01-29T12:04:25.777130303Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 12:04:25.778432 containerd[1959]: time="2025-01-29T12:04:25.778135021Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:25.781542 containerd[1959]: time="2025-01-29T12:04:25.781487865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:25.783331 containerd[1959]: time="2025-01-29T12:04:25.782832776Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 3.275614098s" Jan 29 12:04:25.783331 containerd[1959]: time="2025-01-29T12:04:25.782877370Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 12:04:25.784970 containerd[1959]: time="2025-01-29T12:04:25.784947416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 12:04:28.210521 containerd[1959]: time="2025-01-29T12:04:28.210471034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.211794 containerd[1959]: time="2025-01-29T12:04:28.211745082Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 12:04:28.213298 containerd[1959]: time="2025-01-29T12:04:28.212918966Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.215947 containerd[1959]: time="2025-01-29T12:04:28.215908335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.217119 containerd[1959]: time="2025-01-29T12:04:28.217080523Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.432102044s" Jan 29 12:04:28.217191 containerd[1959]: time="2025-01-29T12:04:28.217126702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 12:04:28.217802 
containerd[1959]: time="2025-01-29T12:04:28.217777185Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 12:04:30.085368 containerd[1959]: time="2025-01-29T12:04:30.085318031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:30.091152 containerd[1959]: time="2025-01-29T12:04:30.090801096Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 12:04:30.095139 containerd[1959]: time="2025-01-29T12:04:30.095059675Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:30.102393 containerd[1959]: time="2025-01-29T12:04:30.102321537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:30.105938 containerd[1959]: time="2025-01-29T12:04:30.105367379Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.887552629s" Jan 29 12:04:30.105938 containerd[1959]: time="2025-01-29T12:04:30.105421045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 12:04:30.106291 containerd[1959]: time="2025-01-29T12:04:30.106192497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 12:04:31.351848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495532292.mount: Deactivated successfully. 
Jan 29 12:04:32.509604 containerd[1959]: time="2025-01-29T12:04:32.509552030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.510899 containerd[1959]: time="2025-01-29T12:04:32.510731001Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 12:04:32.513482 containerd[1959]: time="2025-01-29T12:04:32.512163428Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.514745 containerd[1959]: time="2025-01-29T12:04:32.514521692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.515258 containerd[1959]: time="2025-01-29T12:04:32.515223339Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.408994066s" Jan 29 12:04:32.515342 containerd[1959]: time="2025-01-29T12:04:32.515265920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 12:04:32.516135 containerd[1959]: time="2025-01-29T12:04:32.516103393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:04:33.075874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577690922.mount: Deactivated successfully. Jan 29 12:04:33.638175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:04:33.648620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:33.921504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:33.931805 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:34.028219 kubelet[2536]: E0129 12:04:34.028145 2536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:34.031224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:34.031442 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
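The kubelet keeps exiting with status=1 because the file named by its --config flag, /var/lib/kubelet/config.yaml, does not exist yet; that file is typically written by kubeadm during init/join, after which the restart loop clears (the kubeadm remark is context, not something the log states). A trivial pre-check mirroring the failing path, with the path copied from the error message:

```python
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the kubelet error message

if not KUBELET_CONFIG.is_file():
    raise SystemExit(f"kubelet would exit: {KUBELET_CONFIG} does not exist yet")
print("kubelet config present; the service should stop crash-looping")
```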
Jan 29 12:04:34.424432 containerd[1959]: time="2025-01-29T12:04:34.424382604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.426404 containerd[1959]: time="2025-01-29T12:04:34.426344040Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 12:04:34.429473 containerd[1959]: time="2025-01-29T12:04:34.429406544Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.442325 containerd[1959]: time="2025-01-29T12:04:34.441432317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.443005 containerd[1959]: time="2025-01-29T12:04:34.442959723Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.926739012s" Jan 29 12:04:34.443005 containerd[1959]: time="2025-01-29T12:04:34.443000721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:04:34.443924 containerd[1959]: time="2025-01-29T12:04:34.443895320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:04:35.015958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135392350.mount: Deactivated successfully. 
Jan 29 12:04:35.032662 containerd[1959]: time="2025-01-29T12:04:35.032612333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.033562 containerd[1959]: time="2025-01-29T12:04:35.033505260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 12:04:35.034662 containerd[1959]: time="2025-01-29T12:04:35.034608914Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.038156 containerd[1959]: time="2025-01-29T12:04:35.038100040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.040236 containerd[1959]: time="2025-01-29T12:04:35.040185925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 596.259731ms" Jan 29 12:04:35.040236 containerd[1959]: time="2025-01-29T12:04:35.040227978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 12:04:35.041279 containerd[1959]: time="2025-01-29T12:04:35.041077413Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 12:04:35.603957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365975095.mount: Deactivated successfully. Jan 29 12:04:38.192775 containerd[1959]: time="2025-01-29T12:04:38.192710817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:38.194321 containerd[1959]: time="2025-01-29T12:04:38.194254441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 12:04:38.195878 containerd[1959]: time="2025-01-29T12:04:38.195445936Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:38.198946 containerd[1959]: time="2025-01-29T12:04:38.198559816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:38.199817 containerd[1959]: time="2025-01-29T12:04:38.199782790Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.158665713s" Jan 29 12:04:38.199890 containerd[1959]: time="2025-01-29T12:04:38.199824758Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 12:04:39.252587 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
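Taking the two pulls just above at face value, the "bytes read" counters and pull durations give an effective transfer rate: the pause image (321138 bytes in ~596 ms) versus etcd (56779973 bytes in ~3.16 s). A quick computation with the figures copied from the log; "bytes read" is roughly what was fetched from the registry, so this is only an estimate:

```python
# (bytes read, seconds) copied from the "stop pulling image" / "Pulled image" entries above
pulls = {
    "registry.k8s.io/pause:3.10":    (321_138,     0.596259731),
    "registry.k8s.io/etcd:3.5.15-0": (56_779_973,  3.158665713),
}

for image, (size_bytes, seconds) in pulls.items():
    rate = size_bytes / seconds / 1_000_000          # MB/s
    print(f"{image}: {size_bytes/1_000_000:.2f} MB in {seconds:.2f} s ~ {rate:.1f} MB/s")
```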
Jan 29 12:04:40.804099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:40.810689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:40.846973 systemd[1]: Reloading requested from client PID 2631 ('systemctl') (unit session-5.scope)... Jan 29 12:04:40.846991 systemd[1]: Reloading... Jan 29 12:04:40.963400 zram_generator::config[2669]: No configuration found. Jan 29 12:04:41.123077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:41.210666 systemd[1]: Reloading finished in 363 ms. Jan 29 12:04:41.259939 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:04:41.260096 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:04:41.260473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:41.268855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:41.461756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:41.472765 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:41.524620 kubelet[2730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:41.524620 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:41.524620 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:04:41.526452 kubelet[2730]: I0129 12:04:41.526220 2730 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:42.346048 kubelet[2730]: I0129 12:04:42.346004 2730 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:04:42.346048 kubelet[2730]: I0129 12:04:42.346036 2730 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:42.346394 kubelet[2730]: I0129 12:04:42.346368 2730 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:04:42.395880 kubelet[2730]: E0129 12:04:42.395843 2730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:42.396474 kubelet[2730]: I0129 12:04:42.396286 2730 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:42.405795 kubelet[2730]: E0129 12:04:42.405684 2730 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:42.405795 kubelet[2730]: I0129 12:04:42.405719 2730 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:42.410023 kubelet[2730]: I0129 12:04:42.409989 2730 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:42.412703 kubelet[2730]: I0129 12:04:42.412672 2730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:04:42.412908 kubelet[2730]: I0129 12:04:42.412878 2730 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:42.413100 kubelet[2730]: I0129 12:04:42.412908 2730 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-48","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:42.413230 kubelet[2730]: I0129 12:04:42.413115 2730 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:42.413230 kubelet[2730]: I0129 12:04:42.413129 2730 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:04:42.413334 kubelet[2730]: I0129 12:04:42.413261 2730 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:42.416145 kubelet[2730]: I0129 12:04:42.415855 2730 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:04:42.416145 kubelet[2730]: I0129 12:04:42.415884 2730 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:42.416145 kubelet[2730]: I0129 12:04:42.415923 2730 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:04:42.416145 kubelet[2730]: I0129 12:04:42.415940 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:42.436095 kubelet[2730]: W0129 12:04:42.435374 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-48&limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:42.436095 kubelet[2730]: E0129 12:04:42.435462 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.21.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-48&limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:42.436095 kubelet[2730]: W0129 12:04:42.435713 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:42.436095 kubelet[2730]: E0129 12:04:42.435762 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:42.436546 kubelet[2730]: I0129 12:04:42.436517 2730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:42.439978 kubelet[2730]: I0129 12:04:42.439957 2730 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:42.440915 kubelet[2730]: W0129 12:04:42.440876 2730 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:04:42.441572 kubelet[2730]: I0129 12:04:42.441548 2730 server.go:1269] "Started kubelet" Jan 29 12:04:42.443622 kubelet[2730]: I0129 12:04:42.442693 2730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:42.443927 kubelet[2730]: I0129 12:04:42.443900 2730 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:04:42.446879 kubelet[2730]: I0129 12:04:42.446430 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:42.450317 kubelet[2730]: I0129 12:04:42.450239 2730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:42.450506 kubelet[2730]: I0129 12:04:42.450488 2730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:42.458238 kubelet[2730]: E0129 12:04:42.453516 2730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.48:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-48.181f284ab0763142 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-48,UID:ip-172-31-21-48,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-48,},FirstTimestamp:2025-01-29 12:04:42.441527618 +0000 UTC m=+0.964576315,LastTimestamp:2025-01-29 12:04:42.441527618 +0000 UTC m=+0.964576315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-48,}" Jan 29 12:04:42.458521 kubelet[2730]: I0129 12:04:42.458496 2730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:42.461423 kubelet[2730]: E0129 12:04:42.460488 2730 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"ip-172-31-21-48\" not found" Jan 29 12:04:42.461423 kubelet[2730]: I0129 12:04:42.460556 2730 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:04:42.461423 kubelet[2730]: I0129 12:04:42.460777 2730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:04:42.461423 kubelet[2730]: I0129 12:04:42.460833 2730 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:42.461423 kubelet[2730]: W0129 12:04:42.461232 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:42.461423 kubelet[2730]: E0129 12:04:42.461287 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:42.461885 kubelet[2730]: I0129 12:04:42.461868 2730 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:42.462063 kubelet[2730]: I0129 12:04:42.462042 2730 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:42.463713 kubelet[2730]: E0129 12:04:42.463685 2730 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:42.464209 kubelet[2730]: I0129 12:04:42.464194 2730 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:42.475511 kubelet[2730]: E0129 12:04:42.474365 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-48?timeout=10s\": dial tcp 172.31.21.48:6443: connect: connection refused" interval="200ms" Jan 29 12:04:42.476918 kubelet[2730]: I0129 12:04:42.476870 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:42.478429 kubelet[2730]: I0129 12:04:42.478399 2730 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:04:42.478429 kubelet[2730]: I0129 12:04:42.478423 2730 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:42.478557 kubelet[2730]: I0129 12:04:42.478447 2730 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:04:42.478557 kubelet[2730]: E0129 12:04:42.478499 2730 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:42.486612 kubelet[2730]: W0129 12:04:42.486499 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:42.486782 kubelet[2730]: E0129 12:04:42.486584 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:42.497371 kubelet[2730]: I0129 12:04:42.497175 2730 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:42.497371 kubelet[2730]: I0129 12:04:42.497196 2730 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:42.497371 kubelet[2730]: I0129 12:04:42.497211 2730 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:42.502467 kubelet[2730]: I0129 12:04:42.502432 2730 policy_none.go:49] "None policy: Start" Jan 29 12:04:42.503284 kubelet[2730]: I0129 12:04:42.503257 2730 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:42.503424 kubelet[2730]: I0129 12:04:42.503353 2730 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:42.512228 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:04:42.523181 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:04:42.526661 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:04:42.537316 kubelet[2730]: I0129 12:04:42.537270 2730 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:42.537690 kubelet[2730]: I0129 12:04:42.537537 2730 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:42.537690 kubelet[2730]: I0129 12:04:42.537551 2730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:42.539749 kubelet[2730]: I0129 12:04:42.538682 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:42.540271 kubelet[2730]: E0129 12:04:42.540253 2730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-48\" not found" Jan 29 12:04:42.589033 systemd[1]: Created slice kubepods-burstable-pod071e1551006737256df63733b698c2f8.slice - libcontainer container kubepods-burstable-pod071e1551006737256df63733b698c2f8.slice. Jan 29 12:04:42.598600 systemd[1]: Created slice kubepods-burstable-pod596b4464faa0d1bb8aa8513072de0760.slice - libcontainer container kubepods-burstable-pod596b4464faa0d1bb8aa8513072de0760.slice. 
Jan 29 12:04:42.614580 systemd[1]: Created slice kubepods-burstable-podf0e33540546d2997e65c3c22e559334a.slice - libcontainer container kubepods-burstable-podf0e33540546d2997e65c3c22e559334a.slice. Jan 29 12:04:42.640556 kubelet[2730]: I0129 12:04:42.640210 2730 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:42.640671 kubelet[2730]: E0129 12:04:42.640637 2730 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.48:6443/api/v1/nodes\": dial tcp 172.31.21.48:6443: connect: connection refused" node="ip-172-31-21-48" Jan 29 12:04:42.662012 kubelet[2730]: I0129 12:04:42.661972 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0e33540546d2997e65c3c22e559334a-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-48\" (UID: \"f0e33540546d2997e65c3c22e559334a\") " pod="kube-system/kube-scheduler-ip-172-31-21-48" Jan 29 12:04:42.662012 kubelet[2730]: I0129 12:04:42.662014 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:42.662012 kubelet[2730]: I0129 12:04:42.662041 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:42.662012 kubelet[2730]: I0129 12:04:42.662083 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:42.662406 kubelet[2730]: I0129 12:04:42.662103 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-ca-certs\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:42.662406 kubelet[2730]: I0129 12:04:42.662121 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:42.662406 kubelet[2730]: I0129 12:04:42.662164 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:42.662406 kubelet[2730]: I0129 12:04:42.662185 2730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:42.662406 kubelet[2730]: I0129 12:04:42.662207 2730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:42.679279 kubelet[2730]: E0129 12:04:42.679229 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-48?timeout=10s\": dial tcp 172.31.21.48:6443: connect: connection refused" interval="400ms" Jan 29 12:04:42.843206 kubelet[2730]: I0129 12:04:42.843174 2730 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:42.843542 kubelet[2730]: E0129 12:04:42.843513 2730 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.48:6443/api/v1/nodes\": dial tcp 172.31.21.48:6443: connect: connection refused" node="ip-172-31-21-48" Jan 29 12:04:42.900273 containerd[1959]: time="2025-01-29T12:04:42.900141859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-48,Uid:071e1551006737256df63733b698c2f8,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:42.913523 containerd[1959]: time="2025-01-29T12:04:42.913475850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-48,Uid:596b4464faa0d1bb8aa8513072de0760,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:42.917602 containerd[1959]: time="2025-01-29T12:04:42.917561102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-48,Uid:f0e33540546d2997e65c3c22e559334a,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:43.079937 kubelet[2730]: E0129 12:04:43.079884 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-48?timeout=10s\": dial tcp 172.31.21.48:6443: connect: connection refused" interval="800ms" Jan 29 12:04:43.246127 kubelet[2730]: I0129 12:04:43.246088 2730 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:43.246549 kubelet[2730]: E0129 12:04:43.246458 2730 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.48:6443/api/v1/nodes\": dial tcp 172.31.21.48:6443: connect: connection refused" node="ip-172-31-21-48" Jan 29 12:04:43.417933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602873179.mount: Deactivated successfully. 
Jan 29 12:04:43.429809 containerd[1959]: time="2025-01-29T12:04:43.429755741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:43.431420 containerd[1959]: time="2025-01-29T12:04:43.431361348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:04:43.433521 containerd[1959]: time="2025-01-29T12:04:43.433482889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:43.435736 containerd[1959]: time="2025-01-29T12:04:43.435681781Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:43.442200 containerd[1959]: time="2025-01-29T12:04:43.442147423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:43.443397 containerd[1959]: time="2025-01-29T12:04:43.443363354Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:43.445610 containerd[1959]: time="2025-01-29T12:04:43.445552657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:43.449146 containerd[1959]: time="2025-01-29T12:04:43.449100117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:43.450378 containerd[1959]: time="2025-01-29T12:04:43.449907605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.262608ms" Jan 29 12:04:43.452437 containerd[1959]: time="2025-01-29T12:04:43.452406104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.843616ms" Jan 29 12:04:43.455022 containerd[1959]: time="2025-01-29T12:04:43.454988724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.759236ms" Jan 29 12:04:43.640281 kubelet[2730]: W0129 12:04:43.640136 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-48&limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 
12:04:43.640281 kubelet[2730]: E0129 12:04:43.640211 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-48&limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.708386057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.708448149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.708488762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.708582201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.702724116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.704398909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.704428427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.714039 containerd[1959]: time="2025-01-29T12:04:43.704559921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.728384 containerd[1959]: time="2025-01-29T12:04:43.727754975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.728384 containerd[1959]: time="2025-01-29T12:04:43.727845042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.728384 containerd[1959]: time="2025-01-29T12:04:43.727862538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.729186 containerd[1959]: time="2025-01-29T12:04:43.729104307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.745012 kubelet[2730]: W0129 12:04:43.744875 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:43.745012 kubelet[2730]: E0129 12:04:43.744972 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:43.752777 systemd[1]: Started cri-containerd-0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2.scope - libcontainer container 0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2. Jan 29 12:04:43.787575 systemd[1]: Started cri-containerd-48fcdacef782d100073b06e05d33d1c5072f8ac94ead14eb79f435bc09e49340.scope - libcontainer container 48fcdacef782d100073b06e05d33d1c5072f8ac94ead14eb79f435bc09e49340. Jan 29 12:04:43.790022 systemd[1]: Started cri-containerd-e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea.scope - libcontainer container e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea. Jan 29 12:04:43.843739 kubelet[2730]: W0129 12:04:43.843631 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:43.845638 kubelet[2730]: E0129 12:04:43.843773 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:43.874615 containerd[1959]: time="2025-01-29T12:04:43.874571785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-48,Uid:f0e33540546d2997e65c3c22e559334a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2\"" Jan 29 12:04:43.877427 kubelet[2730]: W0129 12:04:43.875813 2730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.48:6443: connect: connection refused Jan 29 12:04:43.879153 kubelet[2730]: E0129 12:04:43.879060 2730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:43.882792 kubelet[2730]: E0129 12:04:43.882752 2730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-48?timeout=10s\": dial tcp 172.31.21.48:6443: connect: connection refused" interval="1.6s" Jan 29 12:04:43.894749 containerd[1959]: 
time="2025-01-29T12:04:43.894494552Z" level=info msg="CreateContainer within sandbox \"0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:04:43.916664 containerd[1959]: time="2025-01-29T12:04:43.916476170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-48,Uid:071e1551006737256df63733b698c2f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"48fcdacef782d100073b06e05d33d1c5072f8ac94ead14eb79f435bc09e49340\"" Jan 29 12:04:43.922288 containerd[1959]: time="2025-01-29T12:04:43.921837424Z" level=info msg="CreateContainer within sandbox \"48fcdacef782d100073b06e05d33d1c5072f8ac94ead14eb79f435bc09e49340\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:04:43.925970 containerd[1959]: time="2025-01-29T12:04:43.925688340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-48,Uid:596b4464faa0d1bb8aa8513072de0760,Namespace:kube-system,Attempt:0,} returns sandbox id \"e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea\"" Jan 29 12:04:43.931638 containerd[1959]: time="2025-01-29T12:04:43.931415048Z" level=info msg="CreateContainer within sandbox \"e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:04:43.939262 containerd[1959]: time="2025-01-29T12:04:43.938739853Z" level=info msg="CreateContainer within sandbox \"0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff\"" Jan 29 12:04:43.939612 containerd[1959]: time="2025-01-29T12:04:43.939574216Z" level=info msg="StartContainer for \"b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff\"" Jan 29 12:04:43.960112 containerd[1959]: time="2025-01-29T12:04:43.959957693Z" level=info msg="CreateContainer within sandbox \"48fcdacef782d100073b06e05d33d1c5072f8ac94ead14eb79f435bc09e49340\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0445605474d2f0effd131cde5943c83cd96e826b2f9f87c4f9460856571b3291\"" Jan 29 12:04:43.961411 containerd[1959]: time="2025-01-29T12:04:43.961277457Z" level=info msg="StartContainer for \"0445605474d2f0effd131cde5943c83cd96e826b2f9f87c4f9460856571b3291\"" Jan 29 12:04:43.976577 systemd[1]: Started cri-containerd-b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff.scope - libcontainer container b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff. Jan 29 12:04:43.978489 containerd[1959]: time="2025-01-29T12:04:43.978447204Z" level=info msg="CreateContainer within sandbox \"e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116\"" Jan 29 12:04:43.980519 containerd[1959]: time="2025-01-29T12:04:43.980489324Z" level=info msg="StartContainer for \"a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116\"" Jan 29 12:04:44.046663 systemd[1]: Started cri-containerd-0445605474d2f0effd131cde5943c83cd96e826b2f9f87c4f9460856571b3291.scope - libcontainer container 0445605474d2f0effd131cde5943c83cd96e826b2f9f87c4f9460856571b3291. 
Jan 29 12:04:44.049469 systemd[1]: Started cri-containerd-a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116.scope - libcontainer container a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116. Jan 29 12:04:44.053880 kubelet[2730]: I0129 12:04:44.053853 2730 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:44.054448 kubelet[2730]: E0129 12:04:44.054356 2730 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.48:6443/api/v1/nodes\": dial tcp 172.31.21.48:6443: connect: connection refused" node="ip-172-31-21-48" Jan 29 12:04:44.087949 containerd[1959]: time="2025-01-29T12:04:44.087906862Z" level=info msg="StartContainer for \"b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff\" returns successfully" Jan 29 12:04:44.164880 containerd[1959]: time="2025-01-29T12:04:44.164747582Z" level=info msg="StartContainer for \"0445605474d2f0effd131cde5943c83cd96e826b2f9f87c4f9460856571b3291\" returns successfully" Jan 29 12:04:44.186482 containerd[1959]: time="2025-01-29T12:04:44.186428314Z" level=info msg="StartContainer for \"a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116\" returns successfully" Jan 29 12:04:44.547936 kubelet[2730]: E0129 12:04:44.547885 2730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.48:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:45.656476 kubelet[2730]: I0129 12:04:45.656439 2730 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:46.817288 kubelet[2730]: E0129 12:04:46.817251 2730 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-48\" not found" node="ip-172-31-21-48" Jan 29 12:04:46.944718 kubelet[2730]: I0129 12:04:46.944663 2730 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-48" Jan 29 12:04:46.944718 kubelet[2730]: E0129 12:04:46.944718 2730 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-48\": node \"ip-172-31-21-48\" not found" Jan 29 12:04:47.090033 kubelet[2730]: E0129 12:04:47.089490 2730 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-48\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:47.439559 kubelet[2730]: I0129 12:04:47.439482 2730 apiserver.go:52] "Watching apiserver" Jan 29 12:04:47.461229 kubelet[2730]: I0129 12:04:47.461189 2730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:04:48.966540 systemd[1]: Reloading requested from client PID 2996 ('systemctl') (unit session-5.scope)... Jan 29 12:04:48.966561 systemd[1]: Reloading... Jan 29 12:04:49.171456 zram_generator::config[3033]: No configuration found. Jan 29 12:04:49.449519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:49.627161 systemd[1]: Reloading finished in 660 ms. 
Jan 29 12:04:49.697943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:49.711953 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:04:49.712427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:49.712491 systemd[1]: kubelet.service: Consumed 1.314s CPU time, 115.2M memory peak, 0B memory swap peak. Jan 29 12:04:49.719849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:50.008780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:50.018678 (kubelet)[3094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:50.118098 kubelet[3094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:50.118098 kubelet[3094]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:50.118098 kubelet[3094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:50.122341 kubelet[3094]: I0129 12:04:50.117040 3094 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:50.152038 kubelet[3094]: I0129 12:04:50.151762 3094 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:04:50.152230 kubelet[3094]: I0129 12:04:50.152215 3094 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:50.152820 kubelet[3094]: I0129 12:04:50.152801 3094 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:04:50.155483 kubelet[3094]: I0129 12:04:50.155457 3094 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:04:50.158132 kubelet[3094]: I0129 12:04:50.158111 3094 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:50.168340 kubelet[3094]: E0129 12:04:50.168244 3094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:50.168742 kubelet[3094]: I0129 12:04:50.168397 3094 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:50.174443 kubelet[3094]: I0129 12:04:50.174415 3094 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:50.174713 kubelet[3094]: I0129 12:04:50.174672 3094 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:04:50.174844 kubelet[3094]: I0129 12:04:50.174806 3094 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:50.175188 kubelet[3094]: I0129 12:04:50.174847 3094 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-48","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:50.175395 kubelet[3094]: I0129 12:04:50.175195 3094 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:50.175395 kubelet[3094]: I0129 12:04:50.175211 3094 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:04:50.175395 kubelet[3094]: I0129 12:04:50.175251 3094 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:50.175530 kubelet[3094]: I0129 12:04:50.175414 3094 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:04:50.175530 kubelet[3094]: I0129 12:04:50.175432 3094 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:50.178582 kubelet[3094]: I0129 12:04:50.176681 3094 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:04:50.178582 kubelet[3094]: I0129 12:04:50.178397 3094 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:50.194387 kubelet[3094]: I0129 12:04:50.194352 3094 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:50.200331 kubelet[3094]: I0129 12:04:50.199840 3094 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:50.209790 kubelet[3094]: I0129 12:04:50.209750 3094 server.go:1269] "Started kubelet" Jan 29 12:04:50.217339 kubelet[3094]: I0129 12:04:50.217141 3094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:50.222209 kubelet[3094]: I0129 
12:04:50.221931 3094 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:50.239527 kubelet[3094]: I0129 12:04:50.238194 3094 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:04:50.258482 kubelet[3094]: I0129 12:04:50.224062 3094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:50.263605 kubelet[3094]: I0129 12:04:50.224881 3094 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:04:50.264510 kubelet[3094]: I0129 12:04:50.224896 3094 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:04:50.264510 kubelet[3094]: I0129 12:04:50.264254 3094 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:50.264510 kubelet[3094]: I0129 12:04:50.243205 3094 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:50.265650 kubelet[3094]: I0129 12:04:50.265597 3094 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:50.267129 kubelet[3094]: E0129 12:04:50.225078 3094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-48\" not found" Jan 29 12:04:50.267129 kubelet[3094]: I0129 12:04:50.223464 3094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:50.272786 kubelet[3094]: I0129 12:04:50.270588 3094 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:50.277587 kubelet[3094]: I0129 12:04:50.276955 3094 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:50.282463 kubelet[3094]: E0129 12:04:50.282419 3094 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:50.294766 kubelet[3094]: I0129 12:04:50.290161 3094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:50.310795 kubelet[3094]: I0129 12:04:50.310718 3094 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:04:50.310795 kubelet[3094]: I0129 12:04:50.310761 3094 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:50.311267 kubelet[3094]: I0129 12:04:50.310939 3094 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:04:50.311267 kubelet[3094]: E0129 12:04:50.311179 3094 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:50.394050 kubelet[3094]: I0129 12:04:50.393923 3094 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:50.394050 kubelet[3094]: I0129 12:04:50.394019 3094 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:50.394050 kubelet[3094]: I0129 12:04:50.394050 3094 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.394420 3094 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.394437 3094 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.394462 3094 policy_none.go:49] "None policy: Start" Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.395718 3094 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.395741 3094 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:50.397759 kubelet[3094]: I0129 12:04:50.396213 3094 state_mem.go:75] "Updated machine memory state" Jan 29 12:04:50.408502 kubelet[3094]: I0129 12:04:50.408434 3094 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:50.410667 kubelet[3094]: I0129 12:04:50.408700 3094 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:50.410667 kubelet[3094]: I0129 12:04:50.408716 3094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:50.410667 kubelet[3094]: I0129 12:04:50.409459 3094 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:50.470139 kubelet[3094]: I0129 12:04:50.469819 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-ca-certs\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:50.470139 kubelet[3094]: I0129 12:04:50.469868 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:50.470139 kubelet[3094]: I0129 12:04:50.469896 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:50.470139 kubelet[3094]: I0129 12:04:50.469921 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:50.470139 kubelet[3094]: I0129 12:04:50.469947 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:50.470433 kubelet[3094]: I0129 12:04:50.469971 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/071e1551006737256df63733b698c2f8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-48\" (UID: \"071e1551006737256df63733b698c2f8\") " pod="kube-system/kube-apiserver-ip-172-31-21-48" Jan 29 12:04:50.470433 kubelet[3094]: I0129 12:04:50.469996 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:50.470433 kubelet[3094]: I0129 12:04:50.470021 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/596b4464faa0d1bb8aa8513072de0760-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-48\" (UID: \"596b4464faa0d1bb8aa8513072de0760\") " pod="kube-system/kube-controller-manager-ip-172-31-21-48" Jan 29 12:04:50.470433 kubelet[3094]: I0129 12:04:50.470047 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0e33540546d2997e65c3c22e559334a-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-48\" (UID: \"f0e33540546d2997e65c3c22e559334a\") " pod="kube-system/kube-scheduler-ip-172-31-21-48" Jan 29 12:04:50.534552 kubelet[3094]: I0129 12:04:50.531777 3094 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-48" Jan 29 12:04:50.543338 kubelet[3094]: I0129 12:04:50.543166 3094 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-21-48" Jan 29 12:04:50.543338 kubelet[3094]: I0129 12:04:50.543260 3094 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-48" Jan 29 12:04:51.184447 kubelet[3094]: I0129 12:04:51.183494 3094 apiserver.go:52] "Watching apiserver" Jan 29 12:04:51.264841 kubelet[3094]: I0129 12:04:51.264766 3094 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:04:51.408508 kubelet[3094]: I0129 12:04:51.407825 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-48" podStartSLOduration=1.40779121 podStartE2EDuration="1.40779121s" podCreationTimestamp="2025-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:51.407089794 +0000 UTC m=+1.380183335" watchObservedRunningTime="2025-01-29 
12:04:51.40779121 +0000 UTC m=+1.380884747" Jan 29 12:04:51.409543 kubelet[3094]: I0129 12:04:51.409285 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-48" podStartSLOduration=1.409138899 podStartE2EDuration="1.409138899s" podCreationTimestamp="2025-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:51.389313701 +0000 UTC m=+1.362407236" watchObservedRunningTime="2025-01-29 12:04:51.409138899 +0000 UTC m=+1.382232441" Jan 29 12:04:51.450209 kubelet[3094]: I0129 12:04:51.448974 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-48" podStartSLOduration=1.448956043 podStartE2EDuration="1.448956043s" podCreationTimestamp="2025-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:51.427637693 +0000 UTC m=+1.400731235" watchObservedRunningTime="2025-01-29 12:04:51.448956043 +0000 UTC m=+1.422049594" Jan 29 12:04:51.761771 sudo[2247]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:51.788220 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:51.792811 systemd[1]: sshd@4-172.31.21.48:22-139.178.68.195:34800.service: Deactivated successfully. Jan 29 12:04:51.796851 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:04:51.797058 systemd[1]: session-5.scope: Consumed 3.761s CPU time, 143.9M memory peak, 0B memory swap peak. Jan 29 12:04:51.797929 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:04:51.799357 systemd-logind[1947]: Removed session 5. Jan 29 12:04:53.612386 kubelet[3094]: I0129 12:04:53.611922 3094 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:04:53.613641 containerd[1959]: time="2025-01-29T12:04:53.613362886Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:04:53.614504 kubelet[3094]: I0129 12:04:53.614012 3094 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:04:54.196737 update_engine[1948]: I20250129 12:04:54.195474 1948 update_attempter.cc:509] Updating boot flags... Jan 29 12:04:54.343535 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3167) Jan 29 12:04:54.641567 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3169) Jan 29 12:04:54.758519 systemd[1]: Created slice kubepods-besteffort-pode21d6eb9_e0b0_40ea_a15e_8b7e6fdfa162.slice - libcontainer container kubepods-besteffort-pode21d6eb9_e0b0_40ea_a15e_8b7e6fdfa162.slice. Jan 29 12:04:54.794158 systemd[1]: Created slice kubepods-burstable-pod502fa276_b116_4850_a9b3_fe2ac012d0b6.slice - libcontainer container kubepods-burstable-pod502fa276_b116_4850_a9b3_fe2ac012d0b6.slice. 
Jan 29 12:04:54.807379 kubelet[3094]: I0129 12:04:54.807295 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/502fa276-b116-4850-a9b3-fe2ac012d0b6-run\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.807865 kubelet[3094]: I0129 12:04:54.807390 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/502fa276-b116-4850-a9b3-fe2ac012d0b6-cni\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.807865 kubelet[3094]: I0129 12:04:54.807416 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4j7\" (UniqueName: \"kubernetes.io/projected/502fa276-b116-4850-a9b3-fe2ac012d0b6-kube-api-access-gf4j7\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.807865 kubelet[3094]: I0129 12:04:54.807448 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162-xtables-lock\") pod \"kube-proxy-57qpb\" (UID: \"e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162\") " pod="kube-system/kube-proxy-57qpb" Jan 29 12:04:54.807865 kubelet[3094]: I0129 12:04:54.807472 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/502fa276-b116-4850-a9b3-fe2ac012d0b6-cni-plugin\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.807865 kubelet[3094]: I0129 12:04:54.807496 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/502fa276-b116-4850-a9b3-fe2ac012d0b6-flannel-cfg\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.808088 kubelet[3094]: I0129 12:04:54.807522 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/502fa276-b116-4850-a9b3-fe2ac012d0b6-xtables-lock\") pod \"kube-flannel-ds-ts7bq\" (UID: \"502fa276-b116-4850-a9b3-fe2ac012d0b6\") " pod="kube-flannel/kube-flannel-ds-ts7bq" Jan 29 12:04:54.808088 kubelet[3094]: I0129 12:04:54.807548 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162-kube-proxy\") pod \"kube-proxy-57qpb\" (UID: \"e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162\") " pod="kube-system/kube-proxy-57qpb" Jan 29 12:04:54.808088 kubelet[3094]: I0129 12:04:54.807568 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzj8\" (UniqueName: \"kubernetes.io/projected/e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162-kube-api-access-7zzj8\") pod \"kube-proxy-57qpb\" (UID: \"e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162\") " pod="kube-system/kube-proxy-57qpb" Jan 29 12:04:54.808088 kubelet[3094]: I0129 12:04:54.807590 3094 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162-lib-modules\") pod \"kube-proxy-57qpb\" (UID: \"e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162\") " pod="kube-system/kube-proxy-57qpb" Jan 29 12:04:55.086646 containerd[1959]: time="2025-01-29T12:04:55.085420394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57qpb,Uid:e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:55.112294 containerd[1959]: time="2025-01-29T12:04:55.111863188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ts7bq,Uid:502fa276-b116-4850-a9b3-fe2ac012d0b6,Namespace:kube-flannel,Attempt:0,}" Jan 29 12:04:55.130572 containerd[1959]: time="2025-01-29T12:04:55.130474165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:55.130894 containerd[1959]: time="2025-01-29T12:04:55.130533636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:55.130894 containerd[1959]: time="2025-01-29T12:04:55.130568753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:55.130894 containerd[1959]: time="2025-01-29T12:04:55.130795588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:55.180532 systemd[1]: Started cri-containerd-88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9.scope - libcontainer container 88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9. Jan 29 12:04:55.189645 containerd[1959]: time="2025-01-29T12:04:55.188222464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:55.189645 containerd[1959]: time="2025-01-29T12:04:55.188298644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:55.189645 containerd[1959]: time="2025-01-29T12:04:55.188340531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:55.189645 containerd[1959]: time="2025-01-29T12:04:55.188442684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:55.237696 systemd[1]: Started cri-containerd-337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51.scope - libcontainer container 337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51. 
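The VerifyControllerAttachedVolume entries for kube-flannel-ds-ts7bq above match the host-path and ConfigMap volumes declared by the stock kube-flannel DaemonSet. A minimal sketch of that volumes section, assuming the upstream kube-flannel.yml layout (paths are illustrative, not read from this node):

    volumes:
    - name: run
      hostPath:
        path: /run/flannel
    - name: cni-plugin
      hostPath:
        path: /opt/cni/bin
    - name: cni
      hostPath:
        path: /etc/cni/net.d
    - name: flannel-cfg
      configMap:
        name: kube-flannel-cfg
    - name: xtables-lock
      hostPath:
        path: /run/xtables.lock
        type: FileOrCreate

The kube-api-access-gf4j7 volume is the projected service-account token that the kubelet adds automatically.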
Jan 29 12:04:55.253003 containerd[1959]: time="2025-01-29T12:04:55.252457117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57qpb,Uid:e21d6eb9-e0b0-40ea-a15e-8b7e6fdfa162,Namespace:kube-system,Attempt:0,} returns sandbox id \"88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9\"" Jan 29 12:04:55.258022 containerd[1959]: time="2025-01-29T12:04:55.257938844Z" level=info msg="CreateContainer within sandbox \"88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:04:55.299410 containerd[1959]: time="2025-01-29T12:04:55.298334999Z" level=info msg="CreateContainer within sandbox \"88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb60a308af41cbf00df8858a659618f7d956f21c7dbbb895a22fd3918352eae2\"" Jan 29 12:04:55.300597 containerd[1959]: time="2025-01-29T12:04:55.300555880Z" level=info msg="StartContainer for \"fb60a308af41cbf00df8858a659618f7d956f21c7dbbb895a22fd3918352eae2\"" Jan 29 12:04:55.324180 containerd[1959]: time="2025-01-29T12:04:55.324132366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ts7bq,Uid:502fa276-b116-4850-a9b3-fe2ac012d0b6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\"" Jan 29 12:04:55.332683 containerd[1959]: time="2025-01-29T12:04:55.332638242Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 12:04:55.355526 systemd[1]: Started cri-containerd-fb60a308af41cbf00df8858a659618f7d956f21c7dbbb895a22fd3918352eae2.scope - libcontainer container fb60a308af41cbf00df8858a659618f7d956f21c7dbbb895a22fd3918352eae2. Jan 29 12:04:55.396853 containerd[1959]: time="2025-01-29T12:04:55.396807368Z" level=info msg="StartContainer for \"fb60a308af41cbf00df8858a659618f7d956f21c7dbbb895a22fd3918352eae2\" returns successfully" Jan 29 12:04:55.978285 systemd[1]: run-containerd-runc-k8s.io-88296ab38d87164bdf64a9b38ca1ec1f3af07cf1e5526927754958feea4942c9-runc.Quprwz.mount: Deactivated successfully. Jan 29 12:04:56.431092 kubelet[3094]: I0129 12:04:56.430394 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57qpb" podStartSLOduration=2.430373151 podStartE2EDuration="2.430373151s" podCreationTimestamp="2025-01-29 12:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:56.415218152 +0000 UTC m=+6.388311693" watchObservedRunningTime="2025-01-29 12:04:56.430373151 +0000 UTC m=+6.403466689" Jan 29 12:04:57.503107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070125508.mount: Deactivated successfully. 
Jan 29 12:04:57.580577 containerd[1959]: time="2025-01-29T12:04:57.580479149Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.582611 containerd[1959]: time="2025-01-29T12:04:57.582277412Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 29 12:04:57.586202 containerd[1959]: time="2025-01-29T12:04:57.584833923Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.588918 containerd[1959]: time="2025-01-29T12:04:57.588826800Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.589904 containerd[1959]: time="2025-01-29T12:04:57.589869459Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.257191143s" Jan 29 12:04:57.589904 containerd[1959]: time="2025-01-29T12:04:57.589906221Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 29 12:04:57.592851 containerd[1959]: time="2025-01-29T12:04:57.592743330Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 12:04:57.623686 containerd[1959]: time="2025-01-29T12:04:57.623639272Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc\"" Jan 29 12:04:57.625793 containerd[1959]: time="2025-01-29T12:04:57.624671068Z" level=info msg="StartContainer for \"117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc\"" Jan 29 12:04:57.682721 systemd[1]: Started cri-containerd-117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc.scope - libcontainer container 117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc. Jan 29 12:04:57.726942 systemd[1]: cri-containerd-117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc.scope: Deactivated successfully. 
Jan 29 12:04:57.728781 containerd[1959]: time="2025-01-29T12:04:57.728739782Z" level=info msg="StartContainer for \"117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc\" returns successfully" Jan 29 12:04:57.791275 containerd[1959]: time="2025-01-29T12:04:57.791108805Z" level=info msg="shim disconnected" id=117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc namespace=k8s.io Jan 29 12:04:57.791275 containerd[1959]: time="2025-01-29T12:04:57.791171555Z" level=warning msg="cleaning up after shim disconnected" id=117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc namespace=k8s.io Jan 29 12:04:57.791275 containerd[1959]: time="2025-01-29T12:04:57.791183047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:58.365378 systemd[1]: run-containerd-runc-k8s.io-117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc-runc.jwp2hg.mount: Deactivated successfully. Jan 29 12:04:58.365516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-117f2a421393172eec7987b9cb413a976142db377eae33e667b8ae391acf60fc-rootfs.mount: Deactivated successfully. Jan 29 12:04:58.408882 containerd[1959]: time="2025-01-29T12:04:58.408520866Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 12:05:00.735077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430355792.mount: Deactivated successfully. Jan 29 12:05:01.969861 containerd[1959]: time="2025-01-29T12:05:01.969802274Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:01.972926 containerd[1959]: time="2025-01-29T12:05:01.972815550Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 29 12:05:01.983154 containerd[1959]: time="2025-01-29T12:05:01.983057831Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:02.018326 containerd[1959]: time="2025-01-29T12:05:02.016553197Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:02.026119 containerd[1959]: time="2025-01-29T12:05:02.026065826Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.617455788s" Jan 29 12:05:02.026119 containerd[1959]: time="2025-01-29T12:05:02.026115498Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 29 12:05:02.030938 containerd[1959]: time="2025-01-29T12:05:02.030808726Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:05:02.075843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581874358.mount: Deactivated successfully. 
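The install-cni-plugin init container above copies the flannel CNI binary into /opt/cni/bin and exits almost immediately, which is why its scope is deactivated before the StartContainer result is even logged. The install-cni step that follows (using the flannel v0.22.0 image) copies the CNI network configuration from the flannel-cfg ConfigMap into /etc/cni/net.d. A typical conflist written by that step looks roughly like the following; the file name and delegate options are assumptions based on the upstream kube-flannel ConfigMap, not read from this host:

    # /etc/cni/net.d/10-flannel.conflist (illustrative)
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap",
          "capabilities": { "portMappings": true } }
      ]
    }

Once this file exists, containerd has a CNI config and the node can report Ready, which is the "Fast updating node status as it just became ready" transition logged shortly afterwards.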
Jan 29 12:05:02.080773 containerd[1959]: time="2025-01-29T12:05:02.080716546Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f\"" Jan 29 12:05:02.081689 containerd[1959]: time="2025-01-29T12:05:02.081631569Z" level=info msg="StartContainer for \"d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f\"" Jan 29 12:05:02.133563 systemd[1]: Started cri-containerd-d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f.scope - libcontainer container d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f. Jan 29 12:05:02.168543 systemd[1]: cri-containerd-d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f.scope: Deactivated successfully. Jan 29 12:05:02.178101 containerd[1959]: time="2025-01-29T12:05:02.177984887Z" level=info msg="StartContainer for \"d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f\" returns successfully" Jan 29 12:05:02.203031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f-rootfs.mount: Deactivated successfully. Jan 29 12:05:02.253359 kubelet[3094]: I0129 12:05:02.253104 3094 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 12:05:02.345373 systemd[1]: Created slice kubepods-burstable-pod488acd92_753d_4711_8d5a_112cb8e58884.slice - libcontainer container kubepods-burstable-pod488acd92_753d_4711_8d5a_112cb8e58884.slice. Jan 29 12:05:02.368747 systemd[1]: Created slice kubepods-burstable-pod1475098a_8abd_489f_9128_c1ba43b66fab.slice - libcontainer container kubepods-burstable-pod1475098a_8abd_489f_9128_c1ba43b66fab.slice. 
Jan 29 12:05:02.390738 containerd[1959]: time="2025-01-29T12:05:02.390547937Z" level=info msg="shim disconnected" id=d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f namespace=k8s.io Jan 29 12:05:02.390738 containerd[1959]: time="2025-01-29T12:05:02.390698818Z" level=warning msg="cleaning up after shim disconnected" id=d1e78b94e590e1acc7b065fd3cf540bc8817387e14d0eb3ba88b975a1778df7f namespace=k8s.io Jan 29 12:05:02.390738 containerd[1959]: time="2025-01-29T12:05:02.390715125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:02.409569 containerd[1959]: time="2025-01-29T12:05:02.409273439Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:05:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:05:02.423338 containerd[1959]: time="2025-01-29T12:05:02.423271180Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 29 12:05:02.444229 containerd[1959]: time="2025-01-29T12:05:02.444172914Z" level=info msg="CreateContainer within sandbox \"337b4035f9b647a5c4dbf4a9456ea62e495c28042983517316bb28d00c734d51\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8477a6139e623e0f37291d72ef1b6d76d11ebc11ef350dde59f5a5a84191c0e7\"" Jan 29 12:05:02.446220 containerd[1959]: time="2025-01-29T12:05:02.445007357Z" level=info msg="StartContainer for \"8477a6139e623e0f37291d72ef1b6d76d11ebc11ef350dde59f5a5a84191c0e7\"" Jan 29 12:05:02.477516 kubelet[3094]: I0129 12:05:02.476906 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v2n\" (UniqueName: \"kubernetes.io/projected/488acd92-753d-4711-8d5a-112cb8e58884-kube-api-access-p4v2n\") pod \"coredns-6f6b679f8f-hm9ns\" (UID: \"488acd92-753d-4711-8d5a-112cb8e58884\") " pod="kube-system/coredns-6f6b679f8f-hm9ns" Jan 29 12:05:02.477516 kubelet[3094]: I0129 12:05:02.477237 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1475098a-8abd-489f-9128-c1ba43b66fab-config-volume\") pod \"coredns-6f6b679f8f-54p5q\" (UID: \"1475098a-8abd-489f-9128-c1ba43b66fab\") " pod="kube-system/coredns-6f6b679f8f-54p5q" Jan 29 12:05:02.477516 kubelet[3094]: I0129 12:05:02.477272 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbmv\" (UniqueName: \"kubernetes.io/projected/1475098a-8abd-489f-9128-c1ba43b66fab-kube-api-access-zhbmv\") pod \"coredns-6f6b679f8f-54p5q\" (UID: \"1475098a-8abd-489f-9128-c1ba43b66fab\") " pod="kube-system/coredns-6f6b679f8f-54p5q" Jan 29 12:05:02.477516 kubelet[3094]: I0129 12:05:02.477352 3094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/488acd92-753d-4711-8d5a-112cb8e58884-config-volume\") pod \"coredns-6f6b679f8f-hm9ns\" (UID: \"488acd92-753d-4711-8d5a-112cb8e58884\") " pod="kube-system/coredns-6f6b679f8f-hm9ns" Jan 29 12:05:02.494569 systemd[1]: Started cri-containerd-8477a6139e623e0f37291d72ef1b6d76d11ebc11ef350dde59f5a5a84191c0e7.scope - libcontainer container 8477a6139e623e0f37291d72ef1b6d76d11ebc11ef350dde59f5a5a84191c0e7. 
Jan 29 12:05:02.539579 containerd[1959]: time="2025-01-29T12:05:02.538082972Z" level=info msg="StartContainer for \"8477a6139e623e0f37291d72ef1b6d76d11ebc11ef350dde59f5a5a84191c0e7\" returns successfully" Jan 29 12:05:02.661481 containerd[1959]: time="2025-01-29T12:05:02.661428409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hm9ns,Uid:488acd92-753d-4711-8d5a-112cb8e58884,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:02.679158 containerd[1959]: time="2025-01-29T12:05:02.678901056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-54p5q,Uid:1475098a-8abd-489f-9128-c1ba43b66fab,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:02.729398 containerd[1959]: time="2025-01-29T12:05:02.729333872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-54p5q,Uid:1475098a-8abd-489f-9128-c1ba43b66fab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2bd2cf56b70326152a01e19a74cbea5306427d903defdb84e491c3a784c50d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:05:02.729749 kubelet[3094]: E0129 12:05:02.729714 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bd2cf56b70326152a01e19a74cbea5306427d903defdb84e491c3a784c50d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:05:02.729875 kubelet[3094]: E0129 12:05:02.729784 3094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bd2cf56b70326152a01e19a74cbea5306427d903defdb84e491c3a784c50d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-54p5q" Jan 29 12:05:02.729875 kubelet[3094]: E0129 12:05:02.729812 3094 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bd2cf56b70326152a01e19a74cbea5306427d903defdb84e491c3a784c50d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-54p5q" Jan 29 12:05:02.730136 kubelet[3094]: E0129 12:05:02.729961 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-54p5q_kube-system(1475098a-8abd-489f-9128-c1ba43b66fab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-54p5q_kube-system(1475098a-8abd-489f-9128-c1ba43b66fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2bd2cf56b70326152a01e19a74cbea5306427d903defdb84e491c3a784c50d0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-54p5q" podUID="1475098a-8abd-489f-9128-c1ba43b66fab" Jan 29 12:05:02.730832 containerd[1959]: time="2025-01-29T12:05:02.730362258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hm9ns,Uid:488acd92-753d-4711-8d5a-112cb8e58884,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14169cd8cf6777c2ac4e3ddc360c9333cd74be74a90c475dc54f18b04113f8fd\": 
plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:05:02.730915 kubelet[3094]: E0129 12:05:02.730663 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14169cd8cf6777c2ac4e3ddc360c9333cd74be74a90c475dc54f18b04113f8fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:05:02.730915 kubelet[3094]: E0129 12:05:02.730713 3094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14169cd8cf6777c2ac4e3ddc360c9333cd74be74a90c475dc54f18b04113f8fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-hm9ns" Jan 29 12:05:02.730915 kubelet[3094]: E0129 12:05:02.730736 3094 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14169cd8cf6777c2ac4e3ddc360c9333cd74be74a90c475dc54f18b04113f8fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-hm9ns" Jan 29 12:05:02.730915 kubelet[3094]: E0129 12:05:02.730773 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hm9ns_kube-system(488acd92-753d-4711-8d5a-112cb8e58884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hm9ns_kube-system(488acd92-753d-4711-8d5a-112cb8e58884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14169cd8cf6777c2ac4e3ddc360c9333cd74be74a90c475dc54f18b04113f8fd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-hm9ns" podUID="488acd92-753d-4711-8d5a-112cb8e58884" Jan 29 12:05:03.613602 (udev-worker)[3814]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:05:03.646290 systemd-networkd[1808]: flannel.1: Link UP Jan 29 12:05:03.646299 systemd-networkd[1808]: flannel.1: Gained carrier Jan 29 12:05:05.113604 systemd-networkd[1808]: flannel.1: Gained IPv6LL Jan 29 12:05:07.592555 ntpd[1940]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 29 12:05:07.592648 ntpd[1940]: Listen normally on 8 flannel.1 [fe80::f057:4aff:fe32:4fe4%4]:123 Jan 29 12:05:07.593088 ntpd[1940]: 29 Jan 12:05:07 ntpd[1940]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 29 12:05:07.593088 ntpd[1940]: 29 Jan 12:05:07 ntpd[1940]: Listen normally on 8 flannel.1 [fe80::f057:4aff:fe32:4fe4%4]:123 Jan 29 12:05:14.313526 containerd[1959]: time="2025-01-29T12:05:14.312984790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-54p5q,Uid:1475098a-8abd-489f-9128-c1ba43b66fab,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:14.441962 systemd-networkd[1808]: cni0: Link UP Jan 29 12:05:14.442080 systemd-networkd[1808]: cni0: Gained carrier Jan 29 12:05:14.455298 (udev-worker)[3954]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 12:05:14.455608 systemd-networkd[1808]: cni0: Lost carrier Jan 29 12:05:14.498423 systemd-networkd[1808]: vethadbdfa2c: Link UP Jan 29 12:05:14.503099 kernel: cni0: port 1(vethadbdfa2c) entered blocking state Jan 29 12:05:14.503199 kernel: cni0: port 1(vethadbdfa2c) entered disabled state Jan 29 12:05:14.505608 kernel: vethadbdfa2c: entered allmulticast mode Jan 29 12:05:14.505699 kernel: vethadbdfa2c: entered promiscuous mode Jan 29 12:05:14.508743 kernel: cni0: port 1(vethadbdfa2c) entered blocking state Jan 29 12:05:14.508815 kernel: cni0: port 1(vethadbdfa2c) entered forwarding state Jan 29 12:05:14.508846 kernel: cni0: port 1(vethadbdfa2c) entered disabled state Jan 29 12:05:14.510119 (udev-worker)[3960]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:05:14.523165 kernel: cni0: port 1(vethadbdfa2c) entered blocking state Jan 29 12:05:14.524431 kernel: cni0: port 1(vethadbdfa2c) entered forwarding state Jan 29 12:05:14.523840 systemd-networkd[1808]: vethadbdfa2c: Gained carrier Jan 29 12:05:14.524222 systemd-networkd[1808]: cni0: Gained carrier Jan 29 12:05:14.541200 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 29 12:05:14.541200 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 29 12:05:14.575379 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-29T12:05:14.575140897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:14.576578 containerd[1959]: time="2025-01-29T12:05:14.575941316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:14.576578 containerd[1959]: time="2025-01-29T12:05:14.575968894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:14.576578 containerd[1959]: time="2025-01-29T12:05:14.576156688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:14.613805 systemd[1]: run-containerd-runc-k8s.io-ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99-runc.egAYU9.mount: Deactivated successfully. Jan 29 12:05:14.625435 systemd[1]: Started cri-containerd-ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99.scope - libcontainer container ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99. 
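For readability, the delegateAdd netconf that flannel hands to the bridge plugin (logged above on a single line) is, reformatted:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 8951,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "192.168.0.0/24" } ]],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }

This is what creates the cni0 bridge and the vethadbdfa2c/veth8f012948 ports seen in the kernel messages.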
Jan 29 12:05:14.689769 containerd[1959]: time="2025-01-29T12:05:14.689723497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-54p5q,Uid:1475098a-8abd-489f-9128-c1ba43b66fab,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99\"" Jan 29 12:05:14.693897 containerd[1959]: time="2025-01-29T12:05:14.693723766Z" level=info msg="CreateContainer within sandbox \"ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:05:14.718017 containerd[1959]: time="2025-01-29T12:05:14.717920923Z" level=info msg="CreateContainer within sandbox \"ecaf3e2e526bf4f4b66f205bb7565cddfe9b9c2134c342c8625aea1d839e6f99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"293f3d40075bb390ca2c15635809832620b4806198179447a7e6e4df6771001b\"" Jan 29 12:05:14.720173 containerd[1959]: time="2025-01-29T12:05:14.718763796Z" level=info msg="StartContainer for \"293f3d40075bb390ca2c15635809832620b4806198179447a7e6e4df6771001b\"" Jan 29 12:05:14.756530 systemd[1]: Started cri-containerd-293f3d40075bb390ca2c15635809832620b4806198179447a7e6e4df6771001b.scope - libcontainer container 293f3d40075bb390ca2c15635809832620b4806198179447a7e6e4df6771001b. Jan 29 12:05:14.818653 containerd[1959]: time="2025-01-29T12:05:14.818621145Z" level=info msg="StartContainer for \"293f3d40075bb390ca2c15635809832620b4806198179447a7e6e4df6771001b\" returns successfully" Jan 29 12:05:15.312744 containerd[1959]: time="2025-01-29T12:05:15.312605848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hm9ns,Uid:488acd92-753d-4711-8d5a-112cb8e58884,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:15.345819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175156408.mount: Deactivated successfully. Jan 29 12:05:15.357654 systemd-networkd[1808]: veth8f012948: Link UP Jan 29 12:05:15.360367 kernel: cni0: port 2(veth8f012948) entered blocking state Jan 29 12:05:15.360493 kernel: cni0: port 2(veth8f012948) entered disabled state Jan 29 12:05:15.363389 kernel: veth8f012948: entered allmulticast mode Jan 29 12:05:15.366425 kernel: veth8f012948: entered promiscuous mode Jan 29 12:05:15.366518 kernel: cni0: port 2(veth8f012948) entered blocking state Jan 29 12:05:15.366545 kernel: cni0: port 2(veth8f012948) entered forwarding state Jan 29 12:05:15.375149 systemd-networkd[1808]: veth8f012948: Gained carrier Jan 29 12:05:15.378118 containerd[1959]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 29 12:05:15.378118 containerd[1959]: delegateAdd: netconf sent to delegate plugin: Jan 29 12:05:15.430618 containerd[1959]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-29T12:05:15.430277019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:15.430618 containerd[1959]: time="2025-01-29T12:05:15.430434520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:15.430618 containerd[1959]: time="2025-01-29T12:05:15.430448477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:15.431544 containerd[1959]: time="2025-01-29T12:05:15.430960632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:15.464754 systemd[1]: run-containerd-runc-k8s.io-e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d-runc.7Wrrqq.mount: Deactivated successfully. Jan 29 12:05:15.485563 systemd[1]: Started cri-containerd-e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d.scope - libcontainer container e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d. Jan 29 12:05:15.561053 kubelet[3094]: I0129 12:05:15.560840 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ts7bq" podStartSLOduration=14.860942486999999 podStartE2EDuration="21.560818088s" podCreationTimestamp="2025-01-29 12:04:54 +0000 UTC" firstStartedPulling="2025-01-29 12:04:55.327587386 +0000 UTC m=+5.300680911" lastFinishedPulling="2025-01-29 12:05:02.027462983 +0000 UTC m=+12.000556512" observedRunningTime="2025-01-29 12:05:03.454714825 +0000 UTC m=+13.427808365" watchObservedRunningTime="2025-01-29 12:05:15.560818088 +0000 UTC m=+25.533911691" Jan 29 12:05:15.689870 containerd[1959]: time="2025-01-29T12:05:15.689830630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hm9ns,Uid:488acd92-753d-4711-8d5a-112cb8e58884,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d\"" Jan 29 12:05:15.694841 containerd[1959]: time="2025-01-29T12:05:15.694793359Z" level=info msg="CreateContainer within sandbox \"e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:05:15.721737 containerd[1959]: time="2025-01-29T12:05:15.721694504Z" level=info msg="CreateContainer within sandbox \"e56b78e9a289cf1fee2fb95734513fb41f6bb072357df89a2f181c0c1a79311d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19728e3788d5d788d21170de6eacee0bef1408b101914fbf7ce19282c546d6fc\"" Jan 29 12:05:15.724138 containerd[1959]: time="2025-01-29T12:05:15.722877252Z" level=info msg="StartContainer for \"19728e3788d5d788d21170de6eacee0bef1408b101914fbf7ce19282c546d6fc\"" Jan 29 12:05:15.773627 systemd[1]: Started cri-containerd-19728e3788d5d788d21170de6eacee0bef1408b101914fbf7ce19282c546d6fc.scope - libcontainer container 19728e3788d5d788d21170de6eacee0bef1408b101914fbf7ce19282c546d6fc. Jan 29 12:05:15.830822 containerd[1959]: time="2025-01-29T12:05:15.830774855Z" level=info msg="StartContainer for \"19728e3788d5d788d21170de6eacee0bef1408b101914fbf7ce19282c546d6fc\" returns successfully" Jan 29 12:05:15.929470 systemd-networkd[1808]: vethadbdfa2c: Gained IPv6LL Jan 29 12:05:15.929919 systemd-networkd[1808]: cni0: Gained IPv6LL Jan 29 12:05:16.341585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467290368.mount: Deactivated successfully. 
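At this point both coredns pods have sandboxes attached to cni0 and report running. A quick way to confirm they received addresses from this node's 192.168.0.0/24 subnet, run from a machine with the cluster kubeconfig (coredns normally carries the k8s-app=kube-dns label; output will vary):

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide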
Jan 29 12:05:16.536696 kubelet[3094]: I0129 12:05:16.536021 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hm9ns" podStartSLOduration=22.53600026 podStartE2EDuration="22.53600026s" podCreationTimestamp="2025-01-29 12:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:16.535519304 +0000 UTC m=+26.508612847" watchObservedRunningTime="2025-01-29 12:05:16.53600026 +0000 UTC m=+26.509093800" Jan 29 12:05:16.536696 kubelet[3094]: I0129 12:05:16.536140 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-54p5q" podStartSLOduration=22.536132567 podStartE2EDuration="22.536132567s" podCreationTimestamp="2025-01-29 12:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:15.563509826 +0000 UTC m=+25.536603373" watchObservedRunningTime="2025-01-29 12:05:16.536132567 +0000 UTC m=+26.509226108" Jan 29 12:05:16.633954 systemd-networkd[1808]: veth8f012948: Gained IPv6LL Jan 29 12:05:19.592654 ntpd[1940]: Listen normally on 9 cni0 192.168.0.1:123 Jan 29 12:05:19.592752 ntpd[1940]: Listen normally on 10 cni0 [fe80::6444:57ff:fe7d:fba8%5]:123 Jan 29 12:05:19.593182 ntpd[1940]: 29 Jan 12:05:19 ntpd[1940]: Listen normally on 9 cni0 192.168.0.1:123 Jan 29 12:05:19.593182 ntpd[1940]: 29 Jan 12:05:19 ntpd[1940]: Listen normally on 10 cni0 [fe80::6444:57ff:fe7d:fba8%5]:123 Jan 29 12:05:19.593182 ntpd[1940]: 29 Jan 12:05:19 ntpd[1940]: Listen normally on 11 vethadbdfa2c [fe80::a07f:c1ff:fe4c:73cb%6]:123 Jan 29 12:05:19.593182 ntpd[1940]: 29 Jan 12:05:19 ntpd[1940]: Listen normally on 12 veth8f012948 [fe80::b0b4:39ff:feb6:1f73%7]:123 Jan 29 12:05:19.592814 ntpd[1940]: Listen normally on 11 vethadbdfa2c [fe80::a07f:c1ff:fe4c:73cb%6]:123 Jan 29 12:05:19.592858 ntpd[1940]: Listen normally on 12 veth8f012948 [fe80::b0b4:39ff:feb6:1f73%7]:123 Jan 29 12:05:31.915328 systemd[1]: Started sshd@5-172.31.21.48:22-139.178.68.195:52374.service - OpenSSH per-connection server daemon (139.178.68.195:52374). Jan 29 12:05:32.103834 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 52374 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:32.105676 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:32.110811 systemd-logind[1947]: New session 6 of user core. Jan 29 12:05:32.120519 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:05:32.330018 sshd[4231]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:32.334131 systemd[1]: sshd@5-172.31.21.48:22-139.178.68.195:52374.service: Deactivated successfully. Jan 29 12:05:32.336094 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:05:32.337572 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:05:32.338705 systemd-logind[1947]: Removed session 6. Jan 29 12:05:37.364515 systemd[1]: Started sshd@6-172.31.21.48:22-139.178.68.195:60146.service - OpenSSH per-connection server daemon (139.178.68.195:60146). 
Jan 29 12:05:37.530161 sshd[4265]: Accepted publickey for core from 139.178.68.195 port 60146 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:37.531885 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:37.538331 systemd-logind[1947]: New session 7 of user core. Jan 29 12:05:37.542506 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:05:37.736195 sshd[4265]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:37.739735 systemd[1]: sshd@6-172.31.21.48:22-139.178.68.195:60146.service: Deactivated successfully. Jan 29 12:05:37.742621 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:05:37.744218 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:05:37.745545 systemd-logind[1947]: Removed session 7. Jan 29 12:05:42.773675 systemd[1]: Started sshd@7-172.31.21.48:22-139.178.68.195:60158.service - OpenSSH per-connection server daemon (139.178.68.195:60158). Jan 29 12:05:42.934043 sshd[4300]: Accepted publickey for core from 139.178.68.195 port 60158 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:42.935703 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:42.940914 systemd-logind[1947]: New session 8 of user core. Jan 29 12:05:42.948688 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:05:43.140754 sshd[4300]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:43.145666 systemd[1]: sshd@7-172.31.21.48:22-139.178.68.195:60158.service: Deactivated successfully. Jan 29 12:05:43.147809 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:05:43.148774 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:05:43.149919 systemd-logind[1947]: Removed session 8. Jan 29 12:05:48.183817 systemd[1]: Started sshd@8-172.31.21.48:22-139.178.68.195:36772.service - OpenSSH per-connection server daemon (139.178.68.195:36772). Jan 29 12:05:48.333350 sshd[4336]: Accepted publickey for core from 139.178.68.195 port 36772 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:48.334914 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:48.341634 systemd-logind[1947]: New session 9 of user core. Jan 29 12:05:48.348546 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:05:48.541391 sshd[4336]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:48.547270 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:05:48.547799 systemd[1]: sshd@8-172.31.21.48:22-139.178.68.195:36772.service: Deactivated successfully. Jan 29 12:05:48.550288 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:05:48.551461 systemd-logind[1947]: Removed session 9. Jan 29 12:05:48.582778 systemd[1]: Started sshd@9-172.31.21.48:22-139.178.68.195:36786.service - OpenSSH per-connection server daemon (139.178.68.195:36786). Jan 29 12:05:48.747153 sshd[4350]: Accepted publickey for core from 139.178.68.195 port 36786 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:48.748786 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:48.754177 systemd-logind[1947]: New session 10 of user core. Jan 29 12:05:48.757463 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 12:05:49.048891 sshd[4350]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:49.055356 systemd[1]: sshd@9-172.31.21.48:22-139.178.68.195:36786.service: Deactivated successfully. Jan 29 12:05:49.059566 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:05:49.063518 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:05:49.066052 systemd-logind[1947]: Removed session 10. Jan 29 12:05:49.089709 systemd[1]: Started sshd@10-172.31.21.48:22-139.178.68.195:36796.service - OpenSSH per-connection server daemon (139.178.68.195:36796). Jan 29 12:05:49.258218 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 36796 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:49.260556 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:49.265464 systemd-logind[1947]: New session 11 of user core. Jan 29 12:05:49.269519 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:05:49.477162 sshd[4367]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:49.482742 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:05:49.483627 systemd[1]: sshd@10-172.31.21.48:22-139.178.68.195:36796.service: Deactivated successfully. Jan 29 12:05:49.486331 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:05:49.487628 systemd-logind[1947]: Removed session 11. Jan 29 12:05:54.523722 systemd[1]: Started sshd@11-172.31.21.48:22-139.178.68.195:36800.service - OpenSSH per-connection server daemon (139.178.68.195:36800). Jan 29 12:05:54.699701 sshd[4418]: Accepted publickey for core from 139.178.68.195 port 36800 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:54.701863 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:54.709355 systemd-logind[1947]: New session 12 of user core. Jan 29 12:05:54.713586 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:05:54.959249 sshd[4418]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:54.963007 systemd[1]: sshd@11-172.31.21.48:22-139.178.68.195:36800.service: Deactivated successfully. Jan 29 12:05:54.965692 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:05:54.967274 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:05:54.968808 systemd-logind[1947]: Removed session 12. Jan 29 12:05:59.994723 systemd[1]: Started sshd@12-172.31.21.48:22-139.178.68.195:42002.service - OpenSSH per-connection server daemon (139.178.68.195:42002). Jan 29 12:06:00.206923 sshd[4454]: Accepted publickey for core from 139.178.68.195 port 42002 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:00.208923 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:00.214160 systemd-logind[1947]: New session 13 of user core. Jan 29 12:06:00.220540 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:06:00.416194 sshd[4454]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:00.420212 systemd[1]: sshd@12-172.31.21.48:22-139.178.68.195:42002.service: Deactivated successfully. Jan 29 12:06:00.422699 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:06:00.424735 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:06:00.426395 systemd-logind[1947]: Removed session 13. 
Jan 29 12:06:00.459520 systemd[1]: Started sshd@13-172.31.21.48:22-139.178.68.195:42016.service - OpenSSH per-connection server daemon (139.178.68.195:42016). Jan 29 12:06:00.635105 sshd[4467]: Accepted publickey for core from 139.178.68.195 port 42016 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:00.637057 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:00.642717 systemd-logind[1947]: New session 14 of user core. Jan 29 12:06:00.650633 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:06:01.230791 sshd[4467]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:01.236998 systemd[1]: sshd@13-172.31.21.48:22-139.178.68.195:42016.service: Deactivated successfully. Jan 29 12:06:01.247273 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:06:01.259350 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:06:01.283850 systemd[1]: Started sshd@14-172.31.21.48:22-139.178.68.195:42018.service - OpenSSH per-connection server daemon (139.178.68.195:42018). Jan 29 12:06:01.285936 systemd-logind[1947]: Removed session 14. Jan 29 12:06:01.454377 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 42018 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:01.456639 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:01.478273 systemd-logind[1947]: New session 15 of user core. Jan 29 12:06:01.495092 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:06:03.651677 sshd[4478]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:03.659821 systemd[1]: sshd@14-172.31.21.48:22-139.178.68.195:42018.service: Deactivated successfully. Jan 29 12:06:03.663507 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:06:03.665619 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:06:03.669296 systemd-logind[1947]: Removed session 15. Jan 29 12:06:03.681355 systemd[1]: Started sshd@15-172.31.21.48:22-139.178.68.195:42028.service - OpenSSH per-connection server daemon (139.178.68.195:42028). Jan 29 12:06:03.847531 sshd[4506]: Accepted publickey for core from 139.178.68.195 port 42028 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:03.848287 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:03.854600 systemd-logind[1947]: New session 16 of user core. Jan 29 12:06:03.860604 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:06:04.208442 sshd[4506]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:04.220055 systemd[1]: sshd@15-172.31.21.48:22-139.178.68.195:42028.service: Deactivated successfully. Jan 29 12:06:04.223599 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:06:04.226958 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:06:04.237140 systemd[1]: Started sshd@16-172.31.21.48:22-139.178.68.195:42038.service - OpenSSH per-connection server daemon (139.178.68.195:42038). Jan 29 12:06:04.238806 systemd-logind[1947]: Removed session 16. 
Jan 29 12:06:04.407348 sshd[4538]: Accepted publickey for core from 139.178.68.195 port 42038 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:04.408705 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:04.417494 systemd-logind[1947]: New session 17 of user core. Jan 29 12:06:04.424558 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:06:04.643361 sshd[4538]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:04.655026 systemd[1]: sshd@16-172.31.21.48:22-139.178.68.195:42038.service: Deactivated successfully. Jan 29 12:06:04.660650 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:06:04.662588 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:06:04.665866 systemd-logind[1947]: Removed session 17. Jan 29 12:06:09.681875 systemd[1]: Started sshd@17-172.31.21.48:22-139.178.68.195:43646.service - OpenSSH per-connection server daemon (139.178.68.195:43646). Jan 29 12:06:09.852445 sshd[4572]: Accepted publickey for core from 139.178.68.195 port 43646 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:09.854132 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:09.861658 systemd-logind[1947]: New session 18 of user core. Jan 29 12:06:09.865536 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:06:10.057902 sshd[4572]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:10.064529 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:06:10.065162 systemd[1]: sshd@17-172.31.21.48:22-139.178.68.195:43646.service: Deactivated successfully. Jan 29 12:06:10.067757 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:06:10.069876 systemd-logind[1947]: Removed session 18. Jan 29 12:06:15.103171 systemd[1]: Started sshd@18-172.31.21.48:22-139.178.68.195:56416.service - OpenSSH per-connection server daemon (139.178.68.195:56416). Jan 29 12:06:15.259735 sshd[4609]: Accepted publickey for core from 139.178.68.195 port 56416 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:15.261484 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:15.267333 systemd-logind[1947]: New session 19 of user core. Jan 29 12:06:15.271010 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:06:15.505563 sshd[4609]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:15.511195 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:06:15.513254 systemd[1]: sshd@18-172.31.21.48:22-139.178.68.195:56416.service: Deactivated successfully. Jan 29 12:06:15.515811 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:06:15.517647 systemd-logind[1947]: Removed session 19. Jan 29 12:06:20.539935 systemd[1]: Started sshd@19-172.31.21.48:22-139.178.68.195:56428.service - OpenSSH per-connection server daemon (139.178.68.195:56428). Jan 29 12:06:20.722324 sshd[4644]: Accepted publickey for core from 139.178.68.195 port 56428 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:20.724285 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:20.728946 systemd-logind[1947]: New session 20 of user core. Jan 29 12:06:20.736520 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 12:06:20.926444 sshd[4644]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:20.932019 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:06:20.932994 systemd[1]: sshd@19-172.31.21.48:22-139.178.68.195:56428.service: Deactivated successfully. Jan 29 12:06:20.935652 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:06:20.936758 systemd-logind[1947]: Removed session 20. Jan 29 12:06:25.985616 systemd[1]: Started sshd@20-172.31.21.48:22-139.178.68.195:53108.service - OpenSSH per-connection server daemon (139.178.68.195:53108). Jan 29 12:06:26.219340 sshd[4678]: Accepted publickey for core from 139.178.68.195 port 53108 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:26.220015 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:26.238176 systemd-logind[1947]: New session 21 of user core. Jan 29 12:06:26.255741 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:06:26.463270 sshd[4678]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:26.467489 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:06:26.468524 systemd[1]: sshd@20-172.31.21.48:22-139.178.68.195:53108.service: Deactivated successfully. Jan 29 12:06:26.472373 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:06:26.475649 systemd-logind[1947]: Removed session 21. Jan 29 12:06:41.371180 systemd[1]: cri-containerd-a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116.scope: Deactivated successfully. Jan 29 12:06:41.372872 systemd[1]: cri-containerd-a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116.scope: Consumed 2.471s CPU time, 18.7M memory peak, 0B memory swap peak. Jan 29 12:06:41.408938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116-rootfs.mount: Deactivated successfully. 
Jan 29 12:06:41.417796 containerd[1959]: time="2025-01-29T12:06:41.417717973Z" level=info msg="shim disconnected" id=a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116 namespace=k8s.io
Jan 29 12:06:41.417796 containerd[1959]: time="2025-01-29T12:06:41.417794945Z" level=warning msg="cleaning up after shim disconnected" id=a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116 namespace=k8s.io
Jan 29 12:06:41.418972 containerd[1959]: time="2025-01-29T12:06:41.417807130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:41.768503 kubelet[3094]: I0129 12:06:41.768469 3094 scope.go:117] "RemoveContainer" containerID="a453654a5dc95b7fd4f6eb0ba8bcc1ea87ca8218ee5628932014385e7f6bd116"
Jan 29 12:06:41.770701 containerd[1959]: time="2025-01-29T12:06:41.770656093Z" level=info msg="CreateContainer within sandbox \"e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 12:06:41.798617 containerd[1959]: time="2025-01-29T12:06:41.798566409Z" level=info msg="CreateContainer within sandbox \"e20ef3d607879f72c19444536848bf1e87160bd2fd7a4d36eb75cdb62b346dea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3be4d9e620f94d5f5ceb131b5d75f03b27158939107229693ff25af0b5570539\""
Jan 29 12:06:41.800334 containerd[1959]: time="2025-01-29T12:06:41.799158262Z" level=info msg="StartContainer for \"3be4d9e620f94d5f5ceb131b5d75f03b27158939107229693ff25af0b5570539\""
Jan 29 12:06:41.846540 systemd[1]: Started cri-containerd-3be4d9e620f94d5f5ceb131b5d75f03b27158939107229693ff25af0b5570539.scope - libcontainer container 3be4d9e620f94d5f5ceb131b5d75f03b27158939107229693ff25af0b5570539.
Jan 29 12:06:41.902096 containerd[1959]: time="2025-01-29T12:06:41.902047678Z" level=info msg="StartContainer for \"3be4d9e620f94d5f5ceb131b5d75f03b27158939107229693ff25af0b5570539\" returns successfully"
Jan 29 12:06:42.780666 kubelet[3094]: E0129 12:06:42.780605 3094 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-48?timeout=10s\": context deadline exceeded"
Jan 29 12:06:46.469756 systemd[1]: cri-containerd-b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff.scope: Deactivated successfully.
Jan 29 12:06:46.470413 systemd[1]: cri-containerd-b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff.scope: Consumed 1.384s CPU time, 18.4M memory peak, 0B memory swap peak.
Jan 29 12:06:46.518660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff-rootfs.mount: Deactivated successfully.
Jan 29 12:06:46.542004 containerd[1959]: time="2025-01-29T12:06:46.541846900Z" level=info msg="shim disconnected" id=b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff namespace=k8s.io
Jan 29 12:06:46.542004 containerd[1959]: time="2025-01-29T12:06:46.541996646Z" level=warning msg="cleaning up after shim disconnected" id=b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff namespace=k8s.io
Jan 29 12:06:46.542004 containerd[1959]: time="2025-01-29T12:06:46.542009385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:46.787088 kubelet[3094]: I0129 12:06:46.786975 3094 scope.go:117] "RemoveContainer" containerID="b7f0bf8a8e656a0570b17ca2fbf7d1d98e7baa7775faa5d1438f5da112478bff"
Jan 29 12:06:46.790148 containerd[1959]: time="2025-01-29T12:06:46.790109904Z" level=info msg="CreateContainer within sandbox \"0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 12:06:46.838147 containerd[1959]: time="2025-01-29T12:06:46.838094525Z" level=info msg="CreateContainer within sandbox \"0e2b83bb24d908ea1a4e4da6a89399877d2ad4b24ef0f038b8089febb0e493e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cfcfde69a17f969b678fc90a0f170e2f4229f2fdc5c6cf613cb2101d538ab2ae\""
Jan 29 12:06:46.839467 containerd[1959]: time="2025-01-29T12:06:46.839431490Z" level=info msg="StartContainer for \"cfcfde69a17f969b678fc90a0f170e2f4229f2fdc5c6cf613cb2101d538ab2ae\""
Jan 29 12:06:46.879538 systemd[1]: Started cri-containerd-cfcfde69a17f969b678fc90a0f170e2f4229f2fdc5c6cf613cb2101d538ab2ae.scope - libcontainer container cfcfde69a17f969b678fc90a0f170e2f4229f2fdc5c6cf613cb2101d538ab2ae.
Jan 29 12:06:46.952319 containerd[1959]: time="2025-01-29T12:06:46.951973579Z" level=info msg="StartContainer for \"cfcfde69a17f969b678fc90a0f170e2f4229f2fdc5c6cf613cb2101d538ab2ae\" returns successfully"