Jan 13 20:44:59.998866 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:44:59.998912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:59.998930 kernel: BIOS-provided physical RAM map:
Jan 13 20:44:59.998942 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:44:59.998953 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:44:59.998965 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:44:59.998984 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:44:59.998996 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:44:59.999008 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:44:59.999020 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:44:59.999032 kernel: NX (Execute Disable) protection: active
Jan 13 20:44:59.999044 kernel: APIC: Static calls initialized
Jan 13 20:44:59.999056 kernel: SMBIOS 2.7 present.
Jan 13 20:44:59.999069 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:44:59.999088 kernel: Hypervisor detected: KVM
Jan 13 20:44:59.999102 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:44:59.999115 kernel: kvm-clock: using sched offset of 7539615022 cycles
Jan 13 20:44:59.999127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:44:59.999139 kernel: tsc: Detected 2499.996 MHz processor
Jan 13 20:44:59.999152 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:44:59.999165 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:44:59.999182 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:44:59.999194 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:44:59.999206 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:44:59.999220 kernel: Using GB pages for direct mapping
Jan 13 20:44:59.999232 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:44:59.999245 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:44:59.999258 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:44:59.999271 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:44:59.999285 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:44:59.999301 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:44:59.999315 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:44:59.999328 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:44:59.999341 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:44:59.999354 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:44:59.999366 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:44:59.999381 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:44:59.999396 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:44:59.999411 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:44:59.999430 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:44:59.999451 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:44:59.999466 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:44:59.999482 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:44:59.999498 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:44:59.999517 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:44:59.999533 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:44:59.999549 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:44:59.999564 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:44:59.999580 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:44:59.999595 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:44:59.999621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:44:59.999637 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:44:59.999653 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:44:59.999672 kernel: Zone ranges:
Jan 13 20:44:59.999688 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:44:59.999704 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:44:59.999719 kernel: Normal empty
Jan 13 20:44:59.999735 kernel: Movable zone start for each node
Jan 13 20:44:59.999786 kernel: Early memory node ranges
Jan 13 20:45:00.000781 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:45:00.000803 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:45:00.000818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:45:00.000839 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:45:00.000854 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:45:00.000870 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:45:00.000885 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:45:00.000900 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:45:00.000916 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:45:00.000931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:45:00.000946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:45:00.000962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:45:00.000977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:45:00.000996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:45:00.001011 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:45:00.001027 kernel: TSC deadline timer available
Jan 13 20:45:00.001042 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:45:00.001057 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:45:00.001072 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:45:00.001087 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:45:00.001103 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:45:00.001119 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:45:00.001137 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:45:00.001152 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:45:00.001167 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:45:00.001182 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:45:00.001198 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:45:00.001214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:00.001231 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:45:00.001245 kernel: random: crng init done
Jan 13 20:45:00.001263 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:45:00.001278 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:45:00.001293 kernel: Fallback order for Node 0: 0
Jan 13 20:45:00.001309 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:45:00.001324 kernel: Policy zone: DMA32
Jan 13 20:45:00.001339 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:45:00.001355 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Jan 13 20:45:00.001370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:45:00.001389 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:45:00.001404 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:45:00.001419 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:45:00.001435 kernel: Dynamic Preempt: voluntary
Jan 13 20:45:00.001450 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:45:00.001467 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:45:00.001482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:45:00.001498 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:45:00.001514 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:45:00.001529 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:45:00.001548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:45:00.001563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:45:00.001579 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:45:00.001594 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:45:00.001609 kernel: Console: colour VGA+ 80x25
Jan 13 20:45:00.001625 kernel: printk: console [ttyS0] enabled
Jan 13 20:45:00.001640 kernel: ACPI: Core revision 20230628
Jan 13 20:45:00.001655 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:45:00.001671 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:45:00.001691 kernel: x2apic enabled
Jan 13 20:45:00.001707 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:45:00.001734 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:45:00.002807 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 13 20:45:00.002826 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:45:00.002842 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:45:00.002857 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:45:00.002871 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:45:00.002886 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:45:00.002900 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:45:00.002916 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:45:00.002931 kernel: RETBleed: Vulnerable
Jan 13 20:45:00.002951 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:45:00.002966 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:45:00.002981 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:45:00.002995 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:45:00.003011 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:45:00.003025 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:45:00.003044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:45:00.003059 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:45:00.003074 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:45:00.003089 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:45:00.003103 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:45:00.003118 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:45:00.003134 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:45:00.003149 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:45:00.003164 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:45:00.003180 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:45:00.003195 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:45:00.003214 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:45:00.003229 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:45:00.003244 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:45:00.003259 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:45:00.003275 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:45:00.003290 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:45:00.003306 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:45:00.003321 kernel: landlock: Up and running.
Jan 13 20:45:00.003336 kernel: SELinux: Initializing.
Jan 13 20:45:00.003351 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:45:00.003367 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:45:00.003383 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:45:00.003401 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:00.003415 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:00.003431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:00.003448 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:45:00.003463 kernel: signal: max sigframe size: 3632
Jan 13 20:45:00.003478 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:45:00.003494 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:45:00.003510 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:45:00.003524 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:45:00.003544 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:45:00.003558 kernel: .... node #0, CPUs: #1
Jan 13 20:45:00.003573 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:45:00.003590 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:45:00.003613 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:45:00.003629 kernel: smpboot: Max logical packages: 1
Jan 13 20:45:00.003644 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 13 20:45:00.003658 kernel: devtmpfs: initialized
Jan 13 20:45:00.003676 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:45:00.003691 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:45:00.003706 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:45:00.003721 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:45:00.003736 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:45:00.004805 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:45:00.004824 kernel: audit: type=2000 audit(1736801099.515:1): state=initialized audit_enabled=0 res=1
Jan 13 20:45:00.004841 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:45:00.004857 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:45:00.004879 kernel: cpuidle: using governor menu
Jan 13 20:45:00.004895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:45:00.004912 kernel: dca service started, version 1.12.1
Jan 13 20:45:00.004928 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:45:00.004945 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:45:00.004961 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:45:00.004978 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:45:00.004994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:45:00.005011 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:45:00.005031 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:45:00.005048 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:45:00.005064 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:45:00.005081 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:45:00.005097 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:45:00.005113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:45:00.005129 kernel: ACPI: Interpreter enabled
Jan 13 20:45:00.005146 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:45:00.005163 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:45:00.005183 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:45:00.005199 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:45:00.005216 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:45:00.005233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:45:00.005448 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:45:00.005593 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:45:00.005727 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:45:00.006774 kernel: acpiphp: Slot [3] registered
Jan 13 20:45:00.006791 kernel: acpiphp: Slot [4] registered
Jan 13 20:45:00.006806 kernel: acpiphp: Slot [5] registered
Jan 13 20:45:00.006849 kernel: acpiphp: Slot [6] registered
Jan 13 20:45:00.006865 kernel: acpiphp: Slot [7] registered
Jan 13 20:45:00.006878 kernel: acpiphp: Slot [8] registered
Jan 13 20:45:00.006895 kernel: acpiphp: Slot [9] registered
Jan 13 20:45:00.006942 kernel: acpiphp: Slot [10] registered
Jan 13 20:45:00.006957 kernel: acpiphp: Slot [11] registered
Jan 13 20:45:00.007004 kernel: acpiphp: Slot [12] registered
Jan 13 20:45:00.007024 kernel: acpiphp: Slot [13] registered
Jan 13 20:45:00.007067 kernel: acpiphp: Slot [14] registered
Jan 13 20:45:00.007081 kernel: acpiphp: Slot [15] registered
Jan 13 20:45:00.007095 kernel: acpiphp: Slot [16] registered
Jan 13 20:45:00.007139 kernel: acpiphp: Slot [17] registered
Jan 13 20:45:00.007152 kernel: acpiphp: Slot [18] registered
Jan 13 20:45:00.007167 kernel: acpiphp: Slot [19] registered
Jan 13 20:45:00.007217 kernel: acpiphp: Slot [20] registered
Jan 13 20:45:00.007231 kernel: acpiphp: Slot [21] registered
Jan 13 20:45:00.007245 kernel: acpiphp: Slot [22] registered
Jan 13 20:45:00.007292 kernel: acpiphp: Slot [23] registered
Jan 13 20:45:00.007307 kernel: acpiphp: Slot [24] registered
Jan 13 20:45:00.007321 kernel: acpiphp: Slot [25] registered
Jan 13 20:45:00.007337 kernel: acpiphp: Slot [26] registered
Jan 13 20:45:00.007350 kernel: acpiphp: Slot [27] registered
Jan 13 20:45:00.007364 kernel: acpiphp: Slot [28] registered
Jan 13 20:45:00.007379 kernel: acpiphp: Slot [29] registered
Jan 13 20:45:00.007395 kernel: acpiphp: Slot [30] registered
Jan 13 20:45:00.007410 kernel: acpiphp: Slot [31] registered
Jan 13 20:45:00.007429 kernel: PCI host bridge to bus 0000:00
Jan 13 20:45:00.009942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:45:00.010087 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:45:00.010249 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:00.010380 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:45:00.010504 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:45:00.010665 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:45:00.010836 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:45:00.010992 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:45:00.011122 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:45:00.011251 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:45:00.011387 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:45:00.011523 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:45:00.011685 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:45:00.013883 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:45:00.014034 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:45:00.014176 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:45:00.014316 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Jan 13 20:45:00.014462 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:45:00.014602 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:45:00.014791 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:45:00.014934 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:45:00.015080 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:45:00.015218 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:45:00.015361 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:45:00.015498 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:45:00.015520 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:45:00.015542 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:45:00.015559 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:45:00.015575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:45:00.015592 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:45:00.015617 kernel: iommu: Default domain type: Translated
Jan 13 20:45:00.015634 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:45:00.015651 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:45:00.015668 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:45:00.015685 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:45:00.015705 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:45:00.017910 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:45:00.018185 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:45:00.018323 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:45:00.018343 kernel: vgaarb: loaded
Jan 13 20:45:00.018359 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:45:00.018376 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:45:00.018392 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:45:00.018414 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:45:00.018429 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:45:00.018445 kernel: pnp: PnP ACPI init
Jan 13 20:45:00.018461 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:45:00.018477 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:45:00.018492 kernel: NET: Registered PF_INET protocol family
Jan 13 20:45:00.018508 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:45:00.018525 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:45:00.018541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:45:00.018560 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:45:00.018575 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:45:00.018591 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:45:00.018606 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:45:00.018622 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:45:00.018637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:45:00.018653 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:45:00.018804 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:45:00.018926 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:45:00.019049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:00.019170 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:45:00.019307 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:45:00.019327 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:45:00.019342 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:45:00.019358 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:45:00.019374 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:45:00.019390 kernel: Initialise system trusted keyrings
Jan 13 20:45:00.019408 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:45:00.019424 kernel: Key type asymmetric registered
Jan 13 20:45:00.019439 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:45:00.019454 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:45:00.019470 kernel: io scheduler mq-deadline registered
Jan 13 20:45:00.019485 kernel: io scheduler kyber registered
Jan 13 20:45:00.019500 kernel: io scheduler bfq registered
Jan 13 20:45:00.019516 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:45:00.019532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:45:00.019550 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:45:00.019566 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:45:00.019581 kernel: i8042: Warning: Keylock active
Jan 13 20:45:00.019596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:45:00.019621 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:45:00.019794 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:45:00.019920 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:45:00.020041 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:44:59 UTC (1736801099)
Jan 13 20:45:00.020287 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:45:00.020310 kernel: intel_pstate: CPU model not supported
Jan 13 20:45:00.020326 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:45:00.020342 kernel: Segment Routing with IPv6
Jan 13 20:45:00.020358 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:45:00.020373 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:45:00.020389 kernel: Key type dns_resolver registered
Jan 13 20:45:00.020404 kernel: IPI shorthand broadcast: enabled
Jan 13 20:45:00.020420 kernel: sched_clock: Marking stable (716001637, 245750961)->(1097999515, -136246917)
Jan 13 20:45:00.020440 kernel: registered taskstats version 1
Jan 13 20:45:00.020456 kernel: Loading compiled-in X.509 certificates
Jan 13 20:45:00.020472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:45:00.020487 kernel: Key type .fscrypt registered
Jan 13 20:45:00.020502 kernel: Key type fscrypt-provisioning registered
Jan 13 20:45:00.020517 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:45:00.020533 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:45:00.020548 kernel: ima: No architecture policies found
Jan 13 20:45:00.020567 kernel: clk: Disabling unused clocks
Jan 13 20:45:00.020583 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:45:00.020599 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:45:00.020615 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:45:00.020630 kernel: Run /init as init process
Jan 13 20:45:00.020645 kernel: with arguments:
Jan 13 20:45:00.020660 kernel: /init
Jan 13 20:45:00.020675 kernel: with environment:
Jan 13 20:45:00.020690 kernel: HOME=/
Jan 13 20:45:00.020705 kernel: TERM=linux
Jan 13 20:45:00.020723 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:45:00.022789 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:00.022811 systemd[1]: Detected virtualization amazon.
Jan 13 20:45:00.022829 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:00.022845 systemd[1]: Running in initrd.
Jan 13 20:45:00.022862 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:45:00.022881 systemd[1]: Hostname set to .
Jan 13 20:45:00.022899 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:00.022916 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:45:00.022933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:00.022951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:00.022969 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:45:00.022987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:00.023004 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:45:00.023025 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:45:00.023045 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:45:00.023063 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:45:00.023080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:00.023097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:00.023114 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:00.023131 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:00.023152 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:00.023170 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:00.023187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:00.023204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:00.023222 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:45:00.023239 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:45:00.023256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:00.023274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:00.023291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:00.023311 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:00.023329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:45:00.023346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:00.023364 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:45:00.023381 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:45:00.023404 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:00.023422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:00.023439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:00.023456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:00.023474 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:00.023491 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:45:00.023513 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:45:00.023565 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:45:00.023693 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:45:00.023717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:00.023736 systemd-journald[179]: Journal started
Jan 13 20:45:00.023824 systemd-journald[179]: Runtime Journal (/run/log/journal/ec237e024117fab3a051beadbab29822) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:44:59.993991 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:45:00.157998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:45:00.158035 kernel: Bridge firewalling registered
Jan 13 20:45:00.158048 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:00.042440 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:45:00.154315 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:00.164901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:00.176089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:45:00.208553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:45:00.214488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:00.217228 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:00.219137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:00.237168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:00.238286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:00.242960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:00.264467 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:00.276005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:45:00.292392 dracut-cmdline[214]: dracut-dracut-053
Jan 13 20:45:00.305436 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:00.325202 systemd-resolved[204]: Positive Trust Anchors:
Jan 13 20:45:00.325867 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:00.325936 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:00.343918 systemd-resolved[204]: Defaulting to hostname 'linux'.
Jan 13 20:45:00.347894 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:00.349761 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:00.455775 kernel: SCSI subsystem initialized
Jan 13 20:45:00.467777 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:45:00.478771 kernel: iscsi: registered transport (tcp)
Jan 13 20:45:00.502142 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:45:00.502222 kernel: QLogic iSCSI HBA Driver
Jan 13 20:45:00.547286 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:00.562039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:45:00.652513 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:45:00.652703 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:45:00.656221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:45:00.726800 kernel: raid6: avx512x4 gen() 8868 MB/s
Jan 13 20:45:00.746116 kernel: raid6: avx512x2 gen() 11039 MB/s
Jan 13 20:45:00.765794 kernel: raid6: avx512x1 gen() 8712 MB/s
Jan 13 20:45:00.782804 kernel: raid6: avx2x4 gen() 11885 MB/s
Jan 13 20:45:00.799787 kernel: raid6: avx2x2 gen() 16478 MB/s
Jan 13 20:45:00.816856 kernel: raid6: avx2x1 gen() 11966 MB/s
Jan 13 20:45:00.816937 kernel: raid6: using algorithm avx2x2 gen() 16478 MB/s
Jan 13 20:45:00.838784 kernel: raid6: .... xor() 13596 MB/s, rmw enabled
Jan 13 20:45:00.838852 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:45:00.865894 kernel: xor: automatically using best checksumming function avx
Jan 13 20:45:01.214772 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:45:01.226095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:01.237650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:01.269145 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 20:45:01.296909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:01.311687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:45:01.350645 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jan 13 20:45:01.421764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:01.432558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:01.523853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:01.532078 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:45:01.565735 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:01.569035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:01.571943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:01.575354 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:01.585039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:45:01.619387 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:01.629902 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:45:01.658776 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:45:01.658846 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:45:01.660407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:01.660714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:01.665966 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:45:01.695922 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:45:01.696128 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:45:01.696297 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:45:01.696471 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:45:01.696490 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:e8:c9:cf:83:87
Jan 13 20:45:01.670259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:01.671793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:01.672018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:01.674578 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:01.691740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:01.709750 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:45:01.726166 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:45:01.726242 kernel: GPT:9289727 != 16777215
Jan 13 20:45:01.726261 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:45:01.726278 kernel: GPT:9289727 != 16777215
Jan 13 20:45:01.728990 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:45:01.729059 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:01.752361 (udev-worker)[462]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:45:01.873799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (461)
Jan 13 20:45:01.892773 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 13 20:45:01.907991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:01.926953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:01.979095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:45:01.983034 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:02.000141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:45:02.005911 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:45:02.006169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:45:02.019646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:45:02.029027 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:45:02.042214 disk-uuid[633]: Primary Header is updated.
Jan 13 20:45:02.042214 disk-uuid[633]: Secondary Entries is updated.
Jan 13 20:45:02.042214 disk-uuid[633]: Secondary Header is updated.
Jan 13 20:45:02.052892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:03.065112 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:03.065178 disk-uuid[634]: The operation has completed successfully.
Jan 13 20:45:03.199776 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:45:03.199901 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:45:03.232982 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:45:03.247861 sh[894]: Success
Jan 13 20:45:03.273145 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:45:03.379766 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:45:03.388320 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:45:03.400526 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:45:03.451712 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:45:03.451792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:03.451812 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:45:03.451839 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:45:03.452506 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:45:03.581768 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:45:03.594947 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:45:03.597711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:45:03.603946 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:45:03.611023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:45:03.640161 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:03.640234 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:03.640257 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:03.646780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:03.660767 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:03.661340 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:45:03.671285 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:45:03.677958 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:45:03.759928 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:03.767081 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:45:03.816666 systemd-networkd[1086]: lo: Link UP
Jan 13 20:45:03.816678 systemd-networkd[1086]: lo: Gained carrier
Jan 13 20:45:03.820305 systemd-networkd[1086]: Enumeration completed
Jan 13 20:45:03.821442 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:45:03.822907 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:03.824844 systemd-networkd[1086]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:03.824848 systemd-networkd[1086]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:45:03.832765 systemd-networkd[1086]: eth0: Link UP
Jan 13 20:45:03.832775 systemd-networkd[1086]: eth0: Gained carrier
Jan 13 20:45:03.832791 systemd-networkd[1086]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:03.849827 systemd-networkd[1086]: eth0: DHCPv4 address 172.31.29.115/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:45:04.081781 ignition[1005]: Ignition 2.20.0
Jan 13 20:45:04.081792 ignition[1005]: Stage: fetch-offline
Jan 13 20:45:04.084617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:04.081961 ignition[1005]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:04.081988 ignition[1005]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:04.082270 ignition[1005]: Ignition finished successfully
Jan 13 20:45:04.093925 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:45:04.127145 ignition[1094]: Ignition 2.20.0
Jan 13 20:45:04.127158 ignition[1094]: Stage: fetch
Jan 13 20:45:04.128760 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:04.128776 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:04.128898 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:04.153580 ignition[1094]: PUT result: OK
Jan 13 20:45:04.161735 ignition[1094]: parsed url from cmdline: ""
Jan 13 20:45:04.161762 ignition[1094]: no config URL provided
Jan 13 20:45:04.161771 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:45:04.161787 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:45:04.161810 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:04.164864 ignition[1094]: PUT result: OK
Jan 13 20:45:04.164930 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:45:04.171920 ignition[1094]: GET result: OK
Jan 13 20:45:04.172008 ignition[1094]: parsing config with SHA512: eaa2ac6b9a57dafe784008eaf8e4bc413b690ef2623310c7ff9d7c91c998dccedcccc0a75f7d7c30b58798b3df518a7bca845f8541f1423f631f0cb3bf2b2dcc
Jan 13 20:45:04.186415 unknown[1094]: fetched base config from "system"
Jan 13 20:45:04.186431 unknown[1094]: fetched base config from "system"
Jan 13 20:45:04.187189 ignition[1094]: fetch: fetch complete
Jan 13 20:45:04.186442 unknown[1094]: fetched user config from "aws"
Jan 13 20:45:04.187196 ignition[1094]: fetch: fetch passed
Jan 13 20:45:04.192705 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:45:04.187251 ignition[1094]: Ignition finished successfully
Jan 13 20:45:04.205219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:45:04.252612 ignition[1100]: Ignition 2.20.0
Jan 13 20:45:04.252627 ignition[1100]: Stage: kargs
Jan 13 20:45:04.253177 ignition[1100]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:04.253187 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:04.253279 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:04.255541 ignition[1100]: PUT result: OK
Jan 13 20:45:04.268316 ignition[1100]: kargs: kargs passed
Jan 13 20:45:04.268412 ignition[1100]: Ignition finished successfully
Jan 13 20:45:04.271901 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:04.301045 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:45:04.335337 ignition[1106]: Ignition 2.20.0
Jan 13 20:45:04.335352 ignition[1106]: Stage: disks
Jan 13 20:45:04.335934 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:04.336028 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:04.336149 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:04.337551 ignition[1106]: PUT result: OK
Jan 13 20:45:04.344356 ignition[1106]: disks: disks passed
Jan 13 20:45:04.344415 ignition[1106]: Ignition finished successfully
Jan 13 20:45:04.346296 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:45:04.349994 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:04.352640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:45:04.357609 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:04.360189 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:04.362643 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:04.376022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:45:04.417553 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:45:04.423091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:45:04.442902 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:45:04.589058 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:45:04.589807 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:45:04.590474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:45:04.605888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:04.609037 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:45:04.610918 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:45:04.610971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:45:04.611004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:04.633452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:45:04.639847 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1133)
Jan 13 20:45:04.641920 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:45:04.644362 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:04.644401 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:04.644423 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:04.654771 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:04.656166 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:05.010043 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:45:05.028902 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:45:05.035046 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:45:05.040898 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:45:05.358253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:05.366080 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:45:05.369943 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:05.383363 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:45:05.384577 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:05.404117 systemd-networkd[1086]: eth0: Gained IPv6LL
Jan 13 20:45:05.429957 ignition[1246]: INFO : Ignition 2.20.0
Jan 13 20:45:05.429957 ignition[1246]: INFO : Stage: mount
Jan 13 20:45:05.429957 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:05.429957 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:05.429957 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:05.437140 ignition[1246]: INFO : PUT result: OK
Jan 13 20:45:05.440167 ignition[1246]: INFO : mount: mount passed
Jan 13 20:45:05.441129 ignition[1246]: INFO : Ignition finished successfully
Jan 13 20:45:05.442379 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:05.443773 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:45:05.449851 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:45:05.595001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:05.629158 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257)
Jan 13 20:45:05.629306 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:05.631042 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:05.631091 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:05.636775 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:05.638681 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:05.660628 ignition[1274]: INFO : Ignition 2.20.0
Jan 13 20:45:05.660628 ignition[1274]: INFO : Stage: files
Jan 13 20:45:05.663347 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:05.663347 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:05.663347 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:05.668065 ignition[1274]: INFO : PUT result: OK
Jan 13 20:45:05.671448 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:45:05.673968 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:45:05.673968 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:45:05.696488 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:45:05.698295 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:45:05.700074 unknown[1274]: wrote ssh authorized keys file for user: core
Jan 13 20:45:05.701331 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:45:05.704326 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:45:05.706309 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:45:05.706309 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:05.706309 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:45:05.822766 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:45:05.969576 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:05.969576 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:05.975101 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:05.975101 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:05.979337 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:05.979337 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:05.979337 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:05.979337 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:05.987676 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:05.989656 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:05.991922 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:05.993837 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:05.997169 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:05.997169 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:05.997169 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:45:06.463663 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:45:06.862346 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:06.862346 ignition[1274]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 13 20:45:06.867925 ignition[1274]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:45:06.871632 ignition[1274]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:45:06.871632 ignition[1274]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 13 20:45:06.871632 ignition[1274]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 13 20:45:06.876817 ignition[1274]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:06.876817 ignition[1274]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:06.876817 ignition[1274]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 13 20:45:06.876817 ignition[1274]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:06.885307 ignition[1274]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:06.885307 ignition[1274]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:06.885307 ignition[1274]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:06.885307 ignition[1274]: INFO : files: files passed
Jan 13 20:45:06.885307 ignition[1274]: INFO : Ignition finished successfully
Jan 13 20:45:06.886997 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:45:06.903797 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:45:06.929250 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:45:06.936467 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:45:06.938144 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:45:06.946389 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:06.946389 initrd-setup-root-after-ignition[1302]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:06.950695 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:06.953402 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:06.956451 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:45:06.962985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:45:06.997581 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:45:06.997716 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:45:07.000612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:07.002475 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:45:07.004467 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:45:07.010069 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:45:07.029264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:07.037178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:45:07.050832 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:07.051060 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:07.059285 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:45:07.061178 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:45:07.064878 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:07.067643 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:45:07.068917 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:45:07.071739 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:45:07.074103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:07.076649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:07.079072 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:45:07.081354 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:07.083927 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:45:07.086463 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:45:07.089694 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:45:07.091530 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:45:07.092772 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:07.095206 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:07.095346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:07.107045 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:45:07.111364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:07.119371 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:45:07.119587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:07.125480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:45:07.127124 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:07.130392 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:45:07.131456 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:45:07.139027 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:45:07.140413 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:45:07.140658 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:07.154414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:07.154545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:45:07.154945 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:07.160664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:45:07.161667 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:07.173783 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:45:07.173918 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:45:07.181074 ignition[1326]: INFO : Ignition 2.20.0
Jan 13 20:45:07.181074 ignition[1326]: INFO : Stage: umount
Jan 13 20:45:07.183209 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:07.183209 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:07.183209 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:07.187781 ignition[1326]: INFO : PUT result: OK
Jan 13 20:45:07.193444 ignition[1326]: INFO : umount: umount passed
Jan 13 20:45:07.194587 ignition[1326]: INFO : Ignition finished successfully
Jan 13 20:45:07.195232 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:45:07.195355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:45:07.201882 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:45:07.202020 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:45:07.204133 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:45:07.205212 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:07.210769 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:45:07.210881 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:45:07.211983 systemd[1]: Stopped target network.target - Network.
Jan 13 20:45:07.212896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:45:07.212950 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:07.214229 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:45:07.215134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:45:07.218256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:07.222337 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:45:07.228389 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:45:07.230776 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:45:07.230828 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:07.232268 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:45:07.232311 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:07.235942 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:45:07.236024 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:45:07.237761 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:45:07.239786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:07.246045 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:45:07.247435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:07.252158 systemd-networkd[1086]: eth0: DHCPv6 lease lost
Jan 13 20:45:07.253858 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:45:07.255266 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:45:07.255363 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:07.257984 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:45:07.258118 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:45:07.262311 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:45:07.262391 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:07.268396 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:45:07.268477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:07.282315 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:45:07.283956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:45:07.284042 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:07.287808 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:07.295214 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:45:07.295356 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:07.317158 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:45:07.317355 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:07.325126 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:45:07.325385 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:45:07.331899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:45:07.332040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:07.339791 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:45:07.339867 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:07.341922 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:45:07.342018 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:07.346274 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:45:07.346357 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:07.351014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:07.351103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:07.360951 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:45:07.362368 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:45:07.362453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:07.364056 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:45:07.364123 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:07.365651 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:45:07.365724 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:07.367692 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:45:07.367769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:07.369496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:07.369559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:07.375791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:45:07.375903 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:45:07.377877 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:45:07.384941 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:45:07.430812 systemd[1]: Switching root.
Jan 13 20:45:07.464359 systemd-journald[179]: Journal stopped
Jan 13 20:45:09.513459 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:45:09.513552 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:45:09.513575 kernel: SELinux: policy capability open_perms=1
Jan 13 20:45:09.513595 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:45:09.513619 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:45:09.513638 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:45:09.513659 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:45:09.513678 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:45:09.513697 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:45:09.513723 kernel: audit: type=1403 audit(1736801108.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:45:09.513760 systemd[1]: Successfully loaded SELinux policy in 45.351ms.
Jan 13 20:45:09.513788 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.636ms.
Jan 13 20:45:09.513808 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:09.513830 systemd[1]: Detected virtualization amazon.
Jan 13 20:45:09.515270 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:09.515308 systemd[1]: Detected first boot.
Jan 13 20:45:09.515328 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:09.515347 zram_generator::config[1386]: No configuration found.
Jan 13 20:45:09.515380 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:45:09.515398 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:45:09.515417 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:45:09.515441 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:45:09.515459 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:45:09.515477 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:45:09.515495 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:45:09.515515 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:45:09.515534 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:45:09.515561 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:45:09.515597 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:45:09.515619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:09.515638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:09.515657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:45:09.515676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:45:09.515694 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:45:09.515713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:09.515732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:45:09.515768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:09.515788 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:45:09.515810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:09.515829 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:09.515848 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:09.515866 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:09.515886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:45:09.515905 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:45:09.515925 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:45:09.515943 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:45:09.515961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:09.515982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:09.516002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:09.516021 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:45:09.516041 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:45:09.516057 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:45:09.516076 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:45:09.516095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:09.516121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:45:09.516144 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:45:09.516166 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:45:09.516189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:45:09.516213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:09.516233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:09.516254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:45:09.516275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:09.516294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:09.516314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:09.516345 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:45:09.516365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:09.516385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:45:09.516852 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 13 20:45:09.516884 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 13 20:45:09.516903 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:09.516922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:09.516941 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:45:09.516960 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:45:09.516983 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:09.517002 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:09.517020 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:45:09.517039 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:45:09.517057 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:45:09.517076 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:45:09.517099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:45:09.517152 systemd-journald[1487]: Collecting audit messages is disabled.
Jan 13 20:45:09.517268 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:45:09.517288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:09.517308 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:45:09.517326 kernel: fuse: init (API version 7.39)
Jan 13 20:45:09.517346 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:45:09.517365 systemd-journald[1487]: Journal started
Jan 13 20:45:09.517403 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec237e024117fab3a051beadbab29822) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:45:09.520452 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:09.522209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:09.522484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:09.524235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:09.524445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:09.549909 kernel: loop: module loaded
Jan 13 20:45:09.526614 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:45:09.526856 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:45:09.529049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:09.530717 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:45:09.534407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:09.534625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:09.554579 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:45:09.569402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:45:09.574768 kernel: ACPI: bus type drm_connector registered
Jan 13 20:45:09.575104 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:09.575362 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:09.580879 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:45:09.590899 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:45:09.602684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:45:09.604093 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:45:09.619009 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:45:09.625980 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:45:09.627532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:09.638029 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:45:09.640071 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:45:09.648941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:09.665316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:45:09.669850 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec237e024117fab3a051beadbab29822 is 90.232ms for 944 entries.
Jan 13 20:45:09.669850 systemd-journald[1487]: System Journal (/var/log/journal/ec237e024117fab3a051beadbab29822) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:45:09.770330 systemd-journald[1487]: Received client request to flush runtime journal.
Jan 13 20:45:09.674417 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:45:09.678504 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:45:09.702305 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:45:09.704828 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:45:09.725291 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:09.744978 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:45:09.760817 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Jan 13 20:45:09.760852 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Jan 13 20:45:09.773893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:45:09.782019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:09.785694 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:09.793629 udevadm[1545]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:45:09.802291 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:45:09.875164 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:45:09.886032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:45:09.910573 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Jan 13 20:45:09.911012 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Jan 13 20:45:09.920915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:10.417197 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:45:10.424976 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:10.452329 systemd-udevd[1564]: Using default interface naming scheme 'v255'.
Jan 13 20:45:10.541824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:10.553704 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:45:10.601363 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 13 20:45:10.619918 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:45:10.622491 (udev-worker)[1571]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:45:10.740913 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:45:10.750820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 20:45:10.754791 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:45:10.762786 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 13 20:45:10.766670 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jan 13 20:45:10.776771 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jan 13 20:45:10.780778 kernel: ACPI: button: Sleep Button [SLPF]
Jan 13 20:45:10.845858 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:45:10.855366 systemd-networkd[1570]: lo: Link UP
Jan 13 20:45:10.855379 systemd-networkd[1570]: lo: Gained carrier
Jan 13 20:45:10.857206 systemd-networkd[1570]: Enumeration completed
Jan 13 20:45:10.857655 systemd-networkd[1570]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:10.857661 systemd-networkd[1570]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:45:10.861462 systemd-networkd[1570]: eth0: Link UP
Jan 13 20:45:10.861876 systemd-networkd[1570]: eth0: Gained carrier
Jan 13 20:45:10.862003 systemd-networkd[1570]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:10.864254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:10.866075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:45:10.872840 systemd-networkd[1570]: eth0: DHCPv4 address 172.31.29.115/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:45:10.878119 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:45:10.934774 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1578)
Jan 13 20:45:11.121476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:45:11.159456 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:45:11.170157 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:45:11.172253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:11.196300 lvm[1687]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:45:11.224791 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:45:11.226832 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:11.235973 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:45:11.241380 lvm[1691]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:45:11.274093 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:45:11.277064 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:45:11.278956 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:45:11.278987 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:11.280362 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:45:11.283199 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:45:11.291956 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:45:11.306241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:45:11.311320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:45:11.321117 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:45:11.325989 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:45:11.339722 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:45:11.343556 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:45:11.357518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:45:11.381769 kernel: loop0: detected capacity change from 0 to 140992 Jan 13 20:45:11.383033 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:45:11.384102 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 20:45:11.494773 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:45:11.521993 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 20:45:11.649003 kernel: loop2: detected capacity change from 0 to 211296
Jan 13 20:45:11.715774 kernel: loop3: detected capacity change from 0 to 62848
Jan 13 20:45:11.817860 kernel: loop4: detected capacity change from 0 to 140992
Jan 13 20:45:11.840095 kernel: loop5: detected capacity change from 0 to 138184
Jan 13 20:45:11.859769 kernel: loop6: detected capacity change from 0 to 211296
Jan 13 20:45:11.887770 kernel: loop7: detected capacity change from 0 to 62848
Jan 13 20:45:11.897715 (sd-merge)[1712]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:45:11.898424 (sd-merge)[1712]: Merged extensions into '/usr'.
Jan 13 20:45:11.903191 systemd[1]: Reloading requested from client PID 1699 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:45:11.903208 systemd[1]: Reloading...
Jan 13 20:45:12.022887 zram_generator::config[1736]: No configuration found.
Jan 13 20:45:12.244240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:12.251970 systemd-networkd[1570]: eth0: Gained IPv6LL
Jan 13 20:45:12.343841 systemd[1]: Reloading finished in 439 ms.
Jan 13 20:45:12.360733 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:45:12.363145 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:45:12.373181 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:45:12.385078 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:45:12.399884 systemd[1]: Reloading requested from client PID 1796 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:45:12.399912 systemd[1]: Reloading...
Jan 13 20:45:12.429557 systemd-tmpfiles[1797]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:45:12.430120 systemd-tmpfiles[1797]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:45:12.432072 systemd-tmpfiles[1797]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:45:12.432793 systemd-tmpfiles[1797]: ACLs are not supported, ignoring.
Jan 13 20:45:12.432888 systemd-tmpfiles[1797]: ACLs are not supported, ignoring.
Jan 13 20:45:12.439333 systemd-tmpfiles[1797]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:12.439347 systemd-tmpfiles[1797]: Skipping /boot
Jan 13 20:45:12.457341 systemd-tmpfiles[1797]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:12.457356 systemd-tmpfiles[1797]: Skipping /boot
Jan 13 20:45:12.539385 zram_generator::config[1826]: No configuration found.
Jan 13 20:45:12.695662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:12.709584 ldconfig[1695]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:45:12.779341 systemd[1]: Reloading finished in 378 ms.
Jan 13 20:45:12.798009 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:45:12.803693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:12.817948 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:45:12.828058 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:45:12.835159 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:45:12.849358 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:12.856961 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:45:12.874409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:12.874721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:12.880066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:12.886100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:12.901199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:12.903917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:12.904183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:12.925059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:12.925319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:12.930405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:12.933091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:12.934987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:12.935244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:12.942509 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:45:12.946596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:12.946850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:12.960242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:12.960685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:12.978684 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:45:12.981382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:12.985906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:12.997016 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:13.001587 augenrules[1925]: No rules
Jan 13 20:45:13.011935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:13.014030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:13.014098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:13.014164 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:45:13.015320 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:13.017203 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:45:13.017534 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:45:13.019309 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:45:13.027590 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:13.028251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:13.045076 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:45:13.049400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:13.052029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:13.054928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:45:13.064761 systemd-resolved[1888]: Positive Trust Anchors:
Jan 13 20:45:13.064779 systemd-resolved[1888]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:13.064828 systemd-resolved[1888]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:13.072090 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:45:13.074667 systemd-resolved[1888]: Defaulting to hostname 'linux'.
Jan 13 20:45:13.079389 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:13.080697 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:13.081663 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:45:13.082759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:13.098150 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:45:13.099829 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:45:13.099894 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:13.104811 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:45:13.107315 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:45:13.109102 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:45:13.110410 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:45:13.111678 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:45:13.112987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:45:13.113033 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:13.113948 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:13.115700 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:45:13.119556 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:45:13.122594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:45:13.127626 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:45:13.129049 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:13.130117 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:13.132423 systemd[1]: System is tainted: cgroupsv1
Jan 13 20:45:13.132480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:13.132506 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:13.141957 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:45:13.148961 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:45:13.158182 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:45:13.171904 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:45:13.178998 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:45:13.180968 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:45:13.193912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:13.197779 jq[1950]: false
Jan 13 20:45:13.202973 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:45:13.237086 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 20:45:13.249955 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:45:13.268974 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:45:13.271217 dbus-daemon[1949]: [system] SELinux support is enabled
Jan 13 20:45:13.275159 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1570 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found loop4
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found loop5
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found loop6
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found loop7
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p1
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p2
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p3
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found usr
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p4
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p6
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p7
Jan 13 20:45:13.289772 extend-filesystems[1951]: Found nvme0n1p9
Jan 13 20:45:13.289772 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9
Jan 13 20:45:13.289630 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 20:45:13.317951 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:45:13.329945 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:45:13.354500 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:45:13.356600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:45:13.366455 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:45:13.376189 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:45:13.378965 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:45:13.402261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:45:13.402676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:45:13.418902 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9
Jan 13 20:45:13.430238 jq[1979]: true
Jan 13 20:45:13.450226 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:45:13.454822 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: ----------------------------------------------------
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: corporation. Support and training for ntp-4 are
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: available at https://www.nwtime.org/support
Jan 13 20:45:13.463126 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: ----------------------------------------------------
Jan 13 20:45:13.454834 ntpd[1957]: ----------------------------------------------------
Jan 13 20:45:13.454844 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:45:13.454854 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:45:13.454863 ntpd[1957]: corporation. Support and training for ntp-4 are
Jan 13 20:45:13.454874 ntpd[1957]: available at https://www.nwtime.org/support
Jan 13 20:45:13.454884 ntpd[1957]: ----------------------------------------------------
Jan 13 20:45:13.464332 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:45:13.467902 extend-filesystems[1995]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:45:13.488103 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 20:45:13.464685 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:45:13.475560 ntpd[1957]: proto: precision = 0.061 usec (-24)
Jan 13 20:45:13.488366 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: proto: precision = 0.061 usec (-24)
Jan 13 20:45:13.488366 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: basedate set to 2025-01-01
Jan 13 20:45:13.488366 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:45:13.481458 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:45:13.477380 ntpd[1957]: basedate set to 2025-01-01
Jan 13 20:45:13.481787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:45:13.477417 ntpd[1957]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:45:13.505571 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:45:13.502911 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:45:13.507583 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:45:13.507583 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:45:13.505644 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:45:13.502970 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:45:13.517893 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:45:13.517893 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen normally on 3 eth0 172.31.29.115:123
Jan 13 20:45:13.517893 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen normally on 4 lo [::1]:123
Jan 13 20:45:13.517893 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listen normally on 5 eth0 [fe80::4e8:c9ff:fecf:8387%2]:123
Jan 13 20:45:13.517893 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: Listening on routing socket on fd #22 for interface updates
Jan 13 20:45:13.507189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:45:13.509782 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 20:45:13.507218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:45:13.510289 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:45:13.510341 ntpd[1957]: Listen normally on 3 eth0 172.31.29.115:123
Jan 13 20:45:13.510382 ntpd[1957]: Listen normally on 4 lo [::1]:123
Jan 13 20:45:13.510432 ntpd[1957]: Listen normally on 5 eth0 [fe80::4e8:c9ff:fecf:8387%2]:123
Jan 13 20:45:13.516896 ntpd[1957]: Listening on routing socket on fd #22 for interface updates
Jan 13 20:45:13.518973 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:45:13.538850 (ntainerd)[2001]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:45:13.576316 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:13.577207 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:13.577207 ntpd[1957]: 13 Jan 20:45:13 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:13.576358 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:13.583333 jq[1996]: true
Jan 13 20:45:13.617281 update_engine[1975]: I20250113 20:45:13.615844  1975 main.cc:92] Flatcar Update Engine starting
Jan 13 20:45:13.589370 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:45:13.623703 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 20:45:13.647833 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 20:45:13.662802 tar[1990]: linux-amd64/helm
Jan 13 20:45:13.669982 update_engine[1975]: I20250113 20:45:13.662462  1975 update_check_scheduler.cc:74] Next update check in 7m45s
Jan 13 20:45:13.704037 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2036)
Jan 13 20:45:13.663165 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 20:45:13.704300 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 20:45:13.704300 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:45:13.704300 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 20:45:13.664985 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:45:13.716383 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9
Jan 13 20:45:13.673222 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:45:13.680111 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:45:13.737569 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:45:13.738771 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:45:13.771894 systemd-logind[1973]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:45:13.771945 systemd-logind[1973]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 13 20:45:13.771972 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:45:13.774444 systemd-logind[1973]: New seat seat0.
Jan 13 20:45:13.775377 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:45:13.778499 coreos-metadata[1948]: Jan 13 20:45:13.778 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:45:13.786620 coreos-metadata[1948]: Jan 13 20:45:13.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 20:45:13.793799 coreos-metadata[1948]: Jan 13 20:45:13.792 INFO Fetch successful
Jan 13 20:45:13.793799 coreos-metadata[1948]: Jan 13 20:45:13.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 20:45:13.800013 coreos-metadata[1948]: Jan 13 20:45:13.799 INFO Fetch successful
Jan 13 20:45:13.800095 coreos-metadata[1948]: Jan 13 20:45:13.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 20:45:13.805093 coreos-metadata[1948]: Jan 13 20:45:13.805 INFO Fetch successful
Jan 13 20:45:13.805198 coreos-metadata[1948]: Jan 13 20:45:13.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 20:45:13.812786 coreos-metadata[1948]: Jan 13 20:45:13.810 INFO Fetch successful
Jan 13 20:45:13.812786 coreos-metadata[1948]: Jan 13 20:45:13.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 20:45:13.816917 coreos-metadata[1948]: Jan 13 20:45:13.816 INFO Fetch failed with 404: resource not found
Jan 13 20:45:13.817015 coreos-metadata[1948]: Jan 13 20:45:13.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 20:45:13.821154 coreos-metadata[1948]: Jan 13 20:45:13.821 INFO Fetch successful
Jan 13 20:45:13.821266 coreos-metadata[1948]: Jan 13 20:45:13.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 20:45:13.823895 coreos-metadata[1948]: Jan 13 20:45:13.823 INFO Fetch successful
Jan 13 20:45:13.823975 coreos-metadata[1948]: Jan 13 20:45:13.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 20:45:13.827818 coreos-metadata[1948]: Jan 13 20:45:13.826 INFO Fetch successful
Jan 13 20:45:13.827818 coreos-metadata[1948]: Jan 13 20:45:13.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 20:45:13.827818 coreos-metadata[1948]: Jan 13 20:45:13.826 INFO Fetch successful
Jan 13 20:45:13.827818 coreos-metadata[1948]: Jan 13 20:45:13.826 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 20:45:13.831081 coreos-metadata[1948]: Jan 13 20:45:13.831 INFO Fetch successful
Jan 13 20:45:13.953888 bash[2076]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:45:13.955482 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:45:13.975207 systemd[1]: Starting sshkeys.service...
Jan 13 20:45:13.994110 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:45:14.016620 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:45:14.030265 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:45:14.031853 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:45:14.064412 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 20:45:14.064808 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 20:45:14.074035 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 20:45:14.090001 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 20:45:14.108493 amazon-ssm-agent[2037]: Initializing new seelog logger
Jan 13 20:45:14.116136 amazon-ssm-agent[2037]: New Seelog Logger Creation Complete
Jan 13 20:45:14.116136 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.116136 amazon-ssm-agent[2037]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 processing appconfig overrides
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 processing appconfig overrides
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.120863 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 processing appconfig overrides
Jan 13 20:45:14.123006 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO Proxy environment variables:
Jan 13 20:45:14.149783 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.149783 amazon-ssm-agent[2037]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:45:14.149783 amazon-ssm-agent[2037]: 2025/01/13 20:45:14 processing appconfig overrides
Jan 13 20:45:14.155412 polkitd[2124]: Started polkitd version 121
Jan 13 20:45:14.208104 polkitd[2124]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 20:45:14.208191 polkitd[2124]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 20:45:14.218503 polkitd[2124]: Finished loading, compiling and executing 2 rules
Jan 13 20:45:14.223216 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO https_proxy:
Jan 13 20:45:14.224220 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 20:45:14.224645 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 20:45:14.227956 polkitd[2124]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 20:45:14.324314 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO http_proxy:
Jan 13 20:45:14.330652 systemd-hostnamed[2012]: Hostname set to (transient)
Jan 13 20:45:14.331482 systemd-resolved[1888]: System hostname changed to 'ip-172-31-29-115'.
Jan 13 20:45:14.384856 coreos-metadata[2115]: Jan 13 20:45:14.381 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:45:14.384856 coreos-metadata[2115]: Jan 13 20:45:14.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 20:45:14.390317 coreos-metadata[2115]: Jan 13 20:45:14.385 INFO Fetch successful
Jan 13 20:45:14.390317 coreos-metadata[2115]: Jan 13 20:45:14.385 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:45:14.390317 coreos-metadata[2115]: Jan 13 20:45:14.388 INFO Fetch successful
Jan 13 20:45:14.395537 unknown[2115]: wrote ssh authorized keys file for user: core
Jan 13 20:45:14.426012 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO no_proxy:
Jan 13 20:45:14.453949 update-ssh-keys[2195]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:45:14.462290 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:45:14.467469 systemd[1]: Finished sshkeys.service.
Jan 13 20:45:14.528021 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 20:45:14.530912 locksmithd[2038]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:45:14.578150 containerd[2001]: time="2025-01-13T20:45:14.578047916Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:45:14.631555 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO Checking if agent identity type EC2 can be assumed
Jan 13 20:45:14.732408 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO Agent will take identity from EC2
Jan 13 20:45:14.736776 containerd[2001]: time="2025-01-13T20:45:14.735789365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.741098 containerd[2001]: time="2025-01-13T20:45:14.741048149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:14.741192 containerd[2001]: time="2025-01-13T20:45:14.741096447Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:45:14.741192 containerd[2001]: time="2025-01-13T20:45:14.741132442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.742874603Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.742927448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743021356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743039983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743386945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743409170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743432140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743447003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.743670 containerd[2001]: time="2025-01-13T20:45:14.743661805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.747405 containerd[2001]: time="2025-01-13T20:45:14.746088964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:14.747405 containerd[2001]: time="2025-01-13T20:45:14.746394365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:14.747405 containerd[2001]: time="2025-01-13T20:45:14.746417392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:45:14.748803 containerd[2001]: time="2025-01-13T20:45:14.748776744Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:45:14.748909 containerd[2001]: time="2025-01-13T20:45:14.748889623Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.754994733Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755065289Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755090210Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755150817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755185715Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755344005Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755832764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755968730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.755998429Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.756019433Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1 Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.756039649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.756061554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.756080458Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.756559 containerd[2001]: time="2025-01-13T20:45:14.756100127Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756120380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756139657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756159312Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756177194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756204764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756224686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756242770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756261427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756282303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756301511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756319420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756338935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756359511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757126 containerd[2001]: time="2025-01-13T20:45:14.756379708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756396497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756415392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756432376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756452526Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756481322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756505940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756522227Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756570785Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756593098Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756607306Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756649702Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756664984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756682320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 13 20:45:14.757643 containerd[2001]: time="2025-01-13T20:45:14.756698298Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:45:14.760164 containerd[2001]: time="2025-01-13T20:45:14.756719886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:45:14.761128 containerd[2001]: time="2025-01-13T20:45:14.761045373Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:45:14.761128 containerd[2001]: time="2025-01-13T20:45:14.761122814Z" level=info msg="Connect containerd service" Jan 13 20:45:14.761404 containerd[2001]: time="2025-01-13T20:45:14.761173582Z" level=info msg="using legacy CRI server" Jan 13 20:45:14.761404 containerd[2001]: time="2025-01-13T20:45:14.761183471Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:45:14.761404 containerd[2001]: time="2025-01-13T20:45:14.761349486Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:45:14.763567 containerd[2001]: time="2025-01-13T20:45:14.762156504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.765938491Z" level=info msg="Start subscribing containerd event" Jan 13 
20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.766019948Z" level=info msg="Start recovering state" Jan 13 20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.766210190Z" level=info msg="Start event monitor" Jan 13 20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.766268659Z" level=info msg="Start snapshots syncer" Jan 13 20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.766281811Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:45:14.766771 containerd[2001]: time="2025-01-13T20:45:14.766292478Z" level=info msg="Start streaming server" Jan 13 20:45:14.768753 containerd[2001]: time="2025-01-13T20:45:14.766468037Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:45:14.768753 containerd[2001]: time="2025-01-13T20:45:14.767935377Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:45:14.768141 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:45:14.770387 containerd[2001]: time="2025-01-13T20:45:14.770043234Z" level=info msg="containerd successfully booted in 0.195678s" Jan 13 20:45:14.834722 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:14.938821 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:15.038111 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:15.137757 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:45:15.149709 sshd_keygen[1992]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:45:15.188312 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:45:15.201349 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:45:15.214260 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 13 20:45:15.214611 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:45:15.223122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:45:15.238882 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:45:15.265318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:45:15.279185 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:45:15.288134 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:45:15.294220 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:45:15.337893 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:45:15.410521 tar[1990]: linux-amd64/LICENSE Jan 13 20:45:15.410521 tar[1990]: linux-amd64/README.md Jan 13 20:45:15.440772 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:45:15.441417 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:45:15.540890 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [Registrar] Starting registrar module Jan 13 20:45:15.642981 amazon-ssm-agent[2037]: 2025-01-13 20:45:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:45:15.808346 amazon-ssm-agent[2037]: 2025-01-13 20:45:15 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:45:15.837570 amazon-ssm-agent[2037]: 2025-01-13 20:45:15 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:45:15.837570 amazon-ssm-agent[2037]: 2025-01-13 20:45:15 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:45:15.837570 amazon-ssm-agent[2037]: 2025-01-13 20:45:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:45:15.876017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:45:15.876354 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:15.878019 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:45:15.880336 systemd[1]: Startup finished in 9.227s (kernel) + 7.839s (userspace) = 17.066s. Jan 13 20:45:15.908918 amazon-ssm-agent[2037]: 2025-01-13 20:45:15 INFO [CredentialRefresher] Next credential rotation will be in 32.01666274421667 minutes Jan 13 20:45:16.875923 amazon-ssm-agent[2037]: 2025-01-13 20:45:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:45:16.981787 amazon-ssm-agent[2037]: 2025-01-13 20:45:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Jan 13 20:45:17.080071 amazon-ssm-agent[2037]: 2025-01-13 20:45:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:45:17.138947 kubelet[2243]: E0113 20:45:17.138843 2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:17.142425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:17.142775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:21.487188 systemd-resolved[1888]: Clock change detected. Flushing caches. Jan 13 20:45:22.155546 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:45:22.163949 systemd[1]: Started sshd@0-172.31.29.115:22-139.178.89.65:40984.service - OpenSSH per-connection server daemon (139.178.89.65:40984). 
Jan 13 20:45:22.401317 sshd[2269]: Accepted publickey for core from 139.178.89.65 port 40984 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:22.404619 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:22.425478 systemd-logind[1973]: New session 1 of user core. Jan 13 20:45:22.426449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:45:22.441195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:45:22.468725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:45:22.484172 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:45:22.489504 (systemd)[2275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:45:22.628189 systemd[2275]: Queued start job for default target default.target. Jan 13 20:45:22.628794 systemd[2275]: Created slice app.slice - User Application Slice. Jan 13 20:45:22.628827 systemd[2275]: Reached target paths.target - Paths. Jan 13 20:45:22.628863 systemd[2275]: Reached target timers.target - Timers. Jan 13 20:45:22.639662 systemd[2275]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:45:22.647674 systemd[2275]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:45:22.647759 systemd[2275]: Reached target sockets.target - Sockets. Jan 13 20:45:22.647781 systemd[2275]: Reached target basic.target - Basic System. Jan 13 20:45:22.647841 systemd[2275]: Reached target default.target - Main User Target. Jan 13 20:45:22.647881 systemd[2275]: Startup finished in 149ms. Jan 13 20:45:22.649249 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:45:22.658742 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 13 20:45:22.807444 systemd[1]: Started sshd@1-172.31.29.115:22-139.178.89.65:40986.service - OpenSSH per-connection server daemon (139.178.89.65:40986). Jan 13 20:45:22.971892 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 40986 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:22.973309 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:22.979468 systemd-logind[1973]: New session 2 of user core. Jan 13 20:45:22.996037 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:45:23.119942 sshd[2290]: Connection closed by 139.178.89.65 port 40986 Jan 13 20:45:23.120998 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:23.128364 systemd[1]: sshd@1-172.31.29.115:22-139.178.89.65:40986.service: Deactivated successfully. Jan 13 20:45:23.132802 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:45:23.133546 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:45:23.136365 systemd-logind[1973]: Removed session 2. Jan 13 20:45:23.154321 systemd[1]: Started sshd@2-172.31.29.115:22-139.178.89.65:40994.service - OpenSSH per-connection server daemon (139.178.89.65:40994). Jan 13 20:45:23.322009 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 40994 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:23.323710 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:23.332319 systemd-logind[1973]: New session 3 of user core. Jan 13 20:45:23.339518 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:45:23.465290 sshd[2298]: Connection closed by 139.178.89.65 port 40994 Jan 13 20:45:23.466568 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:23.471151 systemd[1]: sshd@2-172.31.29.115:22-139.178.89.65:40994.service: Deactivated successfully. 
Jan 13 20:45:23.484004 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:45:23.486521 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:45:23.491551 systemd-logind[1973]: Removed session 3. Jan 13 20:45:23.500928 systemd[1]: Started sshd@3-172.31.29.115:22-139.178.89.65:41008.service - OpenSSH per-connection server daemon (139.178.89.65:41008). Jan 13 20:45:23.690179 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 41008 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:23.691869 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:23.701257 systemd-logind[1973]: New session 4 of user core. Jan 13 20:45:23.706862 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:45:23.840375 sshd[2306]: Connection closed by 139.178.89.65 port 41008 Jan 13 20:45:23.841698 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:23.847562 systemd[1]: sshd@3-172.31.29.115:22-139.178.89.65:41008.service: Deactivated successfully. Jan 13 20:45:23.856352 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:45:23.857414 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:45:23.861777 systemd-logind[1973]: Removed session 4. Jan 13 20:45:23.875912 systemd[1]: Started sshd@4-172.31.29.115:22-139.178.89.65:41020.service - OpenSSH per-connection server daemon (139.178.89.65:41020). Jan 13 20:45:24.052150 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 41020 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:24.053676 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:24.058646 systemd-logind[1973]: New session 5 of user core. Jan 13 20:45:24.065906 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:45:24.214707 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:45:24.215108 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:25.004103 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:45:25.006684 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:45:25.576746 dockerd[2335]: time="2025-01-13T20:45:25.576684969Z" level=info msg="Starting up" Jan 13 20:45:25.716212 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3992633186-merged.mount: Deactivated successfully. Jan 13 20:45:25.940697 dockerd[2335]: time="2025-01-13T20:45:25.940109236Z" level=info msg="Loading containers: start." Jan 13 20:45:26.186563 kernel: Initializing XFRM netlink socket Jan 13 20:45:26.223894 (udev-worker)[2442]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:45:26.307130 systemd-networkd[1570]: docker0: Link UP Jan 13 20:45:26.342570 dockerd[2335]: time="2025-01-13T20:45:26.342509021Z" level=info msg="Loading containers: done." 
Jan 13 20:45:26.366621 dockerd[2335]: time="2025-01-13T20:45:26.366578336Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:45:26.366811 dockerd[2335]: time="2025-01-13T20:45:26.366691776Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:45:26.366870 dockerd[2335]: time="2025-01-13T20:45:26.366838213Z" level=info msg="Daemon has completed initialization" Jan 13 20:45:26.413334 dockerd[2335]: time="2025-01-13T20:45:26.412818286Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:45:26.413095 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:45:27.594423 containerd[2001]: time="2025-01-13T20:45:27.594378840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:45:28.189646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:45:28.201882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:28.231460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564443774.mount: Deactivated successfully. Jan 13 20:45:28.504739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:45:28.514338 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:28.610221 kubelet[2551]: E0113 20:45:28.609858 2551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:28.614791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:28.615289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:30.752161 containerd[2001]: time="2025-01-13T20:45:30.752106635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:30.753573 containerd[2001]: time="2025-01-13T20:45:30.753514886Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:45:30.755897 containerd[2001]: time="2025-01-13T20:45:30.754479253Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:30.757390 containerd[2001]: time="2025-01-13T20:45:30.757354059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:30.758617 containerd[2001]: time="2025-01-13T20:45:30.758587234Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", 
repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.164161576s"
Jan 13 20:45:30.758728 containerd[2001]: time="2025-01-13T20:45:30.758708696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 13 20:45:30.782926 containerd[2001]: time="2025-01-13T20:45:30.782884550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 20:45:33.388027 containerd[2001]: time="2025-01-13T20:45:33.387974845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:33.390840 containerd[2001]: time="2025-01-13T20:45:33.390774186Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Jan 13 20:45:33.393401 containerd[2001]: time="2025-01-13T20:45:33.393334082Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:33.399254 containerd[2001]: time="2025-01-13T20:45:33.397940907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:33.399254 containerd[2001]: time="2025-01-13T20:45:33.399104864Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.61617304s"
Jan 13 20:45:33.399254 containerd[2001]: time="2025-01-13T20:45:33.399143889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 20:45:33.424655 containerd[2001]: time="2025-01-13T20:45:33.424609841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 20:45:35.112582 containerd[2001]: time="2025-01-13T20:45:35.112513960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:35.114790 containerd[2001]: time="2025-01-13T20:45:35.114728367Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Jan 13 20:45:35.117343 containerd[2001]: time="2025-01-13T20:45:35.117291734Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:35.121549 containerd[2001]: time="2025-01-13T20:45:35.121471958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:35.122820 containerd[2001]: time="2025-01-13T20:45:35.122687412Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.698036156s"
Jan 13 20:45:35.122820 containerd[2001]: time="2025-01-13T20:45:35.122725113Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 20:45:35.151350 containerd[2001]: time="2025-01-13T20:45:35.151293238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 20:45:36.369266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517717391.mount: Deactivated successfully.
Jan 13 20:45:36.913771 containerd[2001]: time="2025-01-13T20:45:36.913714072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:36.915596 containerd[2001]: time="2025-01-13T20:45:36.915545029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Jan 13 20:45:36.917963 containerd[2001]: time="2025-01-13T20:45:36.917904519Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:36.921207 containerd[2001]: time="2025-01-13T20:45:36.921086426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:36.922040 containerd[2001]: time="2025-01-13T20:45:36.921866845Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.770514332s"
Jan 13 20:45:36.922040 containerd[2001]: time="2025-01-13T20:45:36.921923029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 20:45:36.946742 containerd[2001]: time="2025-01-13T20:45:36.946706916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:45:37.511566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3712912056.mount: Deactivated successfully.
Jan 13 20:45:38.640826 containerd[2001]: time="2025-01-13T20:45:38.640769297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:38.642898 containerd[2001]: time="2025-01-13T20:45:38.642813422Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 20:45:38.645248 containerd[2001]: time="2025-01-13T20:45:38.645190523Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:38.649266 containerd[2001]: time="2025-01-13T20:45:38.649224961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:38.650544 containerd[2001]: time="2025-01-13T20:45:38.650357839Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.703612429s"
Jan 13 20:45:38.650544 containerd[2001]: time="2025-01-13T20:45:38.650398029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:45:38.675834 containerd[2001]: time="2025-01-13T20:45:38.675789902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:45:38.865258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:45:38.872772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:39.447984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:39.465361 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:45:39.469494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656485800.mount: Deactivated successfully.
Jan 13 20:45:39.541801 containerd[2001]: time="2025-01-13T20:45:39.541746234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:39.553575 containerd[2001]: time="2025-01-13T20:45:39.553341450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 20:45:39.573331 containerd[2001]: time="2025-01-13T20:45:39.572131039Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:39.582059 containerd[2001]: time="2025-01-13T20:45:39.581988544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:39.589930 containerd[2001]: time="2025-01-13T20:45:39.588617940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 912.7827ms"
Jan 13 20:45:39.589930 containerd[2001]: time="2025-01-13T20:45:39.588667074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 20:45:39.604215 kubelet[2697]: E0113 20:45:39.604141 2697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:45:39.608762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:45:39.609219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:45:39.626979 containerd[2001]: time="2025-01-13T20:45:39.626946327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 20:45:40.273103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080572751.mount: Deactivated successfully.
Jan 13 20:45:43.804249 containerd[2001]: time="2025-01-13T20:45:43.804196973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:43.808815 containerd[2001]: time="2025-01-13T20:45:43.808760983Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 13 20:45:43.811324 containerd[2001]: time="2025-01-13T20:45:43.811257751Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:43.815815 containerd[2001]: time="2025-01-13T20:45:43.815772050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:43.817313 containerd[2001]: time="2025-01-13T20:45:43.817272555Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.190289838s"
Jan 13 20:45:43.817458 containerd[2001]: time="2025-01-13T20:45:43.817439627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 20:45:45.399895 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 20:45:46.611633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:46.619819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:46.646490 systemd[1]: Reloading requested from client PID 2831 ('systemctl') (unit session-5.scope)...
Jan 13 20:45:46.646507 systemd[1]: Reloading...
Jan 13 20:45:46.765551 zram_generator::config[2871]: No configuration found.
Jan 13 20:45:46.938396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:47.017234 systemd[1]: Reloading finished in 370 ms.
Jan 13 20:45:47.073999 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:45:47.074126 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:45:47.075023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:47.084147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:47.251759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:47.266108 (kubelet)[2943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:45:47.322301 kubelet[2943]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:47.322301 kubelet[2943]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:45:47.322301 kubelet[2943]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:47.325509 kubelet[2943]: I0113 20:45:47.325441 2943 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:45:47.720411 kubelet[2943]: I0113 20:45:47.719878 2943 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:45:47.720411 kubelet[2943]: I0113 20:45:47.720333 2943 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:45:47.720924 kubelet[2943]: I0113 20:45:47.720884 2943 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:45:47.780305 kubelet[2943]: E0113 20:45:47.779941 2943 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.780458 kubelet[2943]: I0113 20:45:47.780428 2943 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:45:47.815847 kubelet[2943]: I0113 20:45:47.812898 2943 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:45:47.818181 kubelet[2943]: I0113 20:45:47.818140 2943 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:45:47.819941 kubelet[2943]: I0113 20:45:47.819905 2943 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:45:47.820114 kubelet[2943]: I0113 20:45:47.819958 2943 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:45:47.820114 kubelet[2943]: I0113 20:45:47.819973 2943 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:45:47.820114 kubelet[2943]: I0113 20:45:47.820104 2943 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:47.822019 kubelet[2943]: I0113 20:45:47.821995 2943 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:45:47.822133 kubelet[2943]: I0113 20:45:47.822025 2943 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:45:47.822133 kubelet[2943]: I0113 20:45:47.822061 2943 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:45:47.822133 kubelet[2943]: I0113 20:45:47.822098 2943 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:45:47.826550 kubelet[2943]: W0113 20:45:47.824885 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.826550 kubelet[2943]: E0113 20:45:47.824963 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.826550 kubelet[2943]: W0113 20:45:47.826415 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-115&limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.826550 kubelet[2943]: E0113 20:45:47.826516 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-115&limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.827405 kubelet[2943]: I0113 20:45:47.827264 2943 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:45:47.838878 kubelet[2943]: I0113 20:45:47.838725 2943 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:45:47.844871 kubelet[2943]: W0113 20:45:47.844826 2943 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:45:47.846423 kubelet[2943]: I0113 20:45:47.846385 2943 server.go:1256] "Started kubelet"
Jan 13 20:45:47.848642 kubelet[2943]: I0113 20:45:47.847470 2943 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:45:47.848997 kubelet[2943]: I0113 20:45:47.848970 2943 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:45:47.857012 kubelet[2943]: I0113 20:45:47.856949 2943 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:45:47.858173 kubelet[2943]: I0113 20:45:47.858144 2943 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:45:47.859491 kubelet[2943]: I0113 20:45:47.859227 2943 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:45:47.865495 kubelet[2943]: E0113 20:45:47.865466 2943 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.115:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-115.181a5b7126df536e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-115,UID:ip-172-31-29-115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-115,},FirstTimestamp:2025-01-13 20:45:47.846349678 +0000 UTC m=+0.574881043,LastTimestamp:2025-01-13 20:45:47.846349678 +0000 UTC m=+0.574881043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-115,}"
Jan 13 20:45:47.872987 kubelet[2943]: I0113 20:45:47.872827 2943 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:45:47.877411 kubelet[2943]: E0113 20:45:47.877232 2943 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": dial tcp 172.31.29.115:6443: connect: connection refused" interval="200ms"
Jan 13 20:45:47.879634 kubelet[2943]: I0113 20:45:47.879512 2943 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:45:47.879787 kubelet[2943]: I0113 20:45:47.879734 2943 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:45:47.881024 kubelet[2943]: W0113 20:45:47.880957 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.881132 kubelet[2943]: E0113 20:45:47.881042 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.881342 kubelet[2943]: I0113 20:45:47.881318 2943 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:45:47.881463 kubelet[2943]: I0113 20:45:47.881439 2943 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:45:47.894131 kubelet[2943]: I0113 20:45:47.893758 2943 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:45:47.894131 kubelet[2943]: E0113 20:45:47.893889 2943 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:45:47.930100 kubelet[2943]: I0113 20:45:47.930070 2943 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:45:47.932713 kubelet[2943]: I0113 20:45:47.932686 2943 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:45:47.932713 kubelet[2943]: I0113 20:45:47.932723 2943 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:45:47.933042 kubelet[2943]: I0113 20:45:47.932793 2943 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:45:47.933042 kubelet[2943]: E0113 20:45:47.932918 2943 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:45:47.945722 kubelet[2943]: W0113 20:45:47.945569 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.945722 kubelet[2943]: E0113 20:45:47.945645 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:47.987901 kubelet[2943]: I0113 20:45:47.986869 2943 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:45:47.987901 kubelet[2943]: I0113 20:45:47.986896 2943 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:45:47.987901 kubelet[2943]: I0113 20:45:47.986915 2943 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:47.987901 kubelet[2943]: I0113 20:45:47.987228 2943 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115"
Jan 13 20:45:47.987901 kubelet[2943]: E0113 20:45:47.987646 2943 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.115:6443/api/v1/nodes\": dial tcp 172.31.29.115:6443: connect: connection refused" node="ip-172-31-29-115"
Jan 13 20:45:47.992307 kubelet[2943]: I0113 20:45:47.992262 2943 policy_none.go:49] "None policy: Start"
Jan 13 20:45:47.993011 kubelet[2943]: I0113 20:45:47.992950 2943 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:45:47.993100 kubelet[2943]: I0113 20:45:47.993022 2943 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:45:48.000673 kubelet[2943]: I0113 20:45:48.000635 2943 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:45:48.000936 kubelet[2943]: I0113 20:45:48.000913 2943 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:45:48.006171 kubelet[2943]: E0113 20:45:48.006142 2943 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-115\" not found"
Jan 13 20:45:48.033544 kubelet[2943]: I0113 20:45:48.033483 2943 topology_manager.go:215] "Topology Admit Handler" podUID="ea670ea30a1ec93f22cb40489ac8fc5a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:48.038003 kubelet[2943]: I0113 20:45:48.037972 2943 topology_manager.go:215] "Topology Admit Handler" podUID="e02cae317d7fd1c13452992b233921a8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.040071 kubelet[2943]: I0113 20:45:48.040001 2943 topology_manager.go:215] "Topology Admit Handler" podUID="9a016103985b9d42daf769f56cdc1e64" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-115"
Jan 13 20:45:48.078334 kubelet[2943]: E0113 20:45:48.078296 2943 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": dial tcp 172.31.29.115:6443: connect: connection refused" interval="400ms"
Jan 13 20:45:48.182290 kubelet[2943]: I0113 20:45:48.182234 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.182290 kubelet[2943]: I0113 20:45:48.182298 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.182478 kubelet[2943]: I0113 20:45:48.182327 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:48.182478 kubelet[2943]: I0113 20:45:48.182355 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.182478 kubelet[2943]: I0113 20:45:48.182381 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.182478 kubelet[2943]: I0113 20:45:48.182408 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a016103985b9d42daf769f56cdc1e64-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-115\" (UID: \"9a016103985b9d42daf769f56cdc1e64\") " pod="kube-system/kube-scheduler-ip-172-31-29-115"
Jan 13 20:45:48.182478 kubelet[2943]: I0113 20:45:48.182449 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-ca-certs\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:48.182724 kubelet[2943]: I0113 20:45:48.182492 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:48.182724 kubelet[2943]: I0113 20:45:48.182520 2943 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:48.189900 kubelet[2943]: I0113 20:45:48.189855 2943 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115"
Jan 13 20:45:48.190269 kubelet[2943]: E0113 20:45:48.190240 2943 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.115:6443/api/v1/nodes\": dial tcp 172.31.29.115:6443: connect: connection refused" node="ip-172-31-29-115"
Jan 13 20:45:48.347123 containerd[2001]: time="2025-01-13T20:45:48.346995468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-115,Uid:e02cae317d7fd1c13452992b233921a8,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:48.353317 containerd[2001]: time="2025-01-13T20:45:48.353008080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-115,Uid:9a016103985b9d42daf769f56cdc1e64,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:48.353713 containerd[2001]: time="2025-01-13T20:45:48.353684492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-115,Uid:ea670ea30a1ec93f22cb40489ac8fc5a,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:48.480511 kubelet[2943]: E0113 20:45:48.480246 2943 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": dial tcp 172.31.29.115:6443: connect: connection refused" interval="800ms"
Jan 13 20:45:48.599480 kubelet[2943]: I0113 20:45:48.599360 2943 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115"
Jan 13 20:45:48.599827 kubelet[2943]: E0113 20:45:48.599803 2943 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.115:6443/api/v1/nodes\": dial tcp 172.31.29.115:6443: connect: connection refused" node="ip-172-31-29-115"
Jan 13 20:45:48.918879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617731452.mount: Deactivated successfully.
Jan 13 20:45:48.937977 containerd[2001]: time="2025-01-13T20:45:48.937917406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:48.947660 containerd[2001]: time="2025-01-13T20:45:48.947243940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 20:45:48.949688 containerd[2001]: time="2025-01-13T20:45:48.949640050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:48.952467 containerd[2001]: time="2025-01-13T20:45:48.952422673Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:48.957267 containerd[2001]: time="2025-01-13T20:45:48.957066272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:45:48.959903 containerd[2001]: time="2025-01-13T20:45:48.959852697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:48.962959 containerd[2001]: time="2025-01-13T20:45:48.962898882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:48.964511 containerd[2001]: time="2025-01-13T20:45:48.964341534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:45:48.964511 containerd[2001]: time="2025-01-13T20:45:48.964403684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.29709ms"
Jan 13 20:45:48.972958 containerd[2001]: time="2025-01-13T20:45:48.971915973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.027705ms"
Jan 13 20:45:48.976127 kubelet[2943]: W0113 20:45:48.976053 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:48.976467 kubelet[2943]: E0113 20:45:48.976139 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:48.997565 containerd[2001]: time="2025-01-13T20:45:48.997106615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.960555ms"
Jan 13 20:45:49.044611 kubelet[2943]: W0113 20:45:49.039452 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:49.044611 kubelet[2943]: E0113 20:45:49.039544 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:49.154290 kubelet[2943]: W0113 20:45:49.154214 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:49.154494 kubelet[2943]: E0113 20:45:49.154481 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused
Jan 13 20:45:49.217618 containerd[2001]: time="2025-01-13T20:45:49.216027151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:49.217618 containerd[2001]: time="2025-01-13T20:45:49.216107239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:49.217618 containerd[2001]: time="2025-01-13T20:45:49.216131890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:49.217618 containerd[2001]: time="2025-01-13T20:45:49.216242525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:49.220766 containerd[2001]: time="2025-01-13T20:45:49.214632548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:49.220766 containerd[2001]: time="2025-01-13T20:45:49.220572781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:49.220766 containerd[2001]: time="2025-01-13T20:45:49.220603798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:49.221710 containerd[2001]: time="2025-01-13T20:45:49.221566567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:49.221843 containerd[2001]: time="2025-01-13T20:45:49.221690165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:49.222187 containerd[2001]: time="2025-01-13T20:45:49.221933046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:49.222303 containerd[2001]: time="2025-01-13T20:45:49.222110590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:49.223101 containerd[2001]: time="2025-01-13T20:45:49.222993454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:49.281363 kubelet[2943]: E0113 20:45:49.281328 2943 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": dial tcp 172.31.29.115:6443: connect: connection refused" interval="1.6s" Jan 13 20:45:49.313740 kubelet[2943]: W0113 20:45:49.313656 2943 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-115&limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused Jan 13 20:45:49.313864 kubelet[2943]: E0113 20:45:49.313750 2943 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-115&limit=500&resourceVersion=0": dial tcp 172.31.29.115:6443: connect: connection refused Jan 13 20:45:49.346619 containerd[2001]: time="2025-01-13T20:45:49.346521522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-115,Uid:e02cae317d7fd1c13452992b233921a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3f22f66a9c5bf8f6a17d7976d42c846fc8da05afcfaa78f9a91f6c4652f52de\"" Jan 13 20:45:49.354909 containerd[2001]: time="2025-01-13T20:45:49.354750673Z" level=info msg="CreateContainer within sandbox \"b3f22f66a9c5bf8f6a17d7976d42c846fc8da05afcfaa78f9a91f6c4652f52de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:45:49.393853 containerd[2001]: time="2025-01-13T20:45:49.392055522Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-115,Uid:ea670ea30a1ec93f22cb40489ac8fc5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f236dc210a490e709d2dc4b5af66b20a4bd9d74c46991193d32cb77300f80831\"" Jan 13 20:45:49.400048 containerd[2001]: time="2025-01-13T20:45:49.399891623Z" level=info msg="CreateContainer within sandbox \"f236dc210a490e709d2dc4b5af66b20a4bd9d74c46991193d32cb77300f80831\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:45:49.404009 kubelet[2943]: I0113 20:45:49.403602 2943 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115" Jan 13 20:45:49.405301 kubelet[2943]: E0113 20:45:49.404949 2943 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.115:6443/api/v1/nodes\": dial tcp 172.31.29.115:6443: connect: connection refused" node="ip-172-31-29-115" Jan 13 20:45:49.406463 containerd[2001]: time="2025-01-13T20:45:49.406430140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-115,Uid:9a016103985b9d42daf769f56cdc1e64,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e31b6a0cc425c3bd68f454726d4f2513a6535155e58616dbab1c9e305825b3\"" Jan 13 20:45:49.410469 containerd[2001]: time="2025-01-13T20:45:49.410441627Z" level=info msg="CreateContainer within sandbox \"93e31b6a0cc425c3bd68f454726d4f2513a6535155e58616dbab1c9e305825b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:45:49.430259 containerd[2001]: time="2025-01-13T20:45:49.430192267Z" level=info msg="CreateContainer within sandbox \"b3f22f66a9c5bf8f6a17d7976d42c846fc8da05afcfaa78f9a91f6c4652f52de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba\"" Jan 13 20:45:49.432564 containerd[2001]: time="2025-01-13T20:45:49.432047374Z" level=info msg="StartContainer for 
\"15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba\"" Jan 13 20:45:49.461265 containerd[2001]: time="2025-01-13T20:45:49.461229271Z" level=info msg="CreateContainer within sandbox \"f236dc210a490e709d2dc4b5af66b20a4bd9d74c46991193d32cb77300f80831\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcb922302a7ca55bc8d04a32c1882884dc7be418fda8d4939b9834aac8c837f9\"" Jan 13 20:45:49.461996 containerd[2001]: time="2025-01-13T20:45:49.461970492Z" level=info msg="StartContainer for \"fcb922302a7ca55bc8d04a32c1882884dc7be418fda8d4939b9834aac8c837f9\"" Jan 13 20:45:49.482701 containerd[2001]: time="2025-01-13T20:45:49.481859854Z" level=info msg="CreateContainer within sandbox \"93e31b6a0cc425c3bd68f454726d4f2513a6535155e58616dbab1c9e305825b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794\"" Jan 13 20:45:49.486173 containerd[2001]: time="2025-01-13T20:45:49.485991464Z" level=info msg="StartContainer for \"804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794\"" Jan 13 20:45:49.593002 containerd[2001]: time="2025-01-13T20:45:49.592463863Z" level=info msg="StartContainer for \"15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba\" returns successfully" Jan 13 20:45:49.655557 containerd[2001]: time="2025-01-13T20:45:49.654171006Z" level=info msg="StartContainer for \"fcb922302a7ca55bc8d04a32c1882884dc7be418fda8d4939b9834aac8c837f9\" returns successfully" Jan 13 20:45:49.707478 containerd[2001]: time="2025-01-13T20:45:49.707285698Z" level=info msg="StartContainer for \"804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794\" returns successfully" Jan 13 20:45:49.950685 kubelet[2943]: E0113 20:45:49.950584 2943 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.31.29.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.115:6443: connect: connection refused Jan 13 20:45:51.010567 kubelet[2943]: I0113 20:45:51.008024 2943 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115" Jan 13 20:45:52.556230 kubelet[2943]: I0113 20:45:52.555909 2943 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-115" Jan 13 20:45:52.595265 kubelet[2943]: E0113 20:45:52.595215 2943 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-115.181a5b7126df536e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-115,UID:ip-172-31-29-115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-115,},FirstTimestamp:2025-01-13 20:45:47.846349678 +0000 UTC m=+0.574881043,LastTimestamp:2025-01-13 20:45:47.846349678 +0000 UTC m=+0.574881043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-115,}" Jan 13 20:45:52.828136 kubelet[2943]: I0113 20:45:52.828001 2943 apiserver.go:52] "Watching apiserver" Jan 13 20:45:52.880135 kubelet[2943]: I0113 20:45:52.880094 2943 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:45:55.452746 systemd[1]: Reloading requested from client PID 3221 ('systemctl') (unit session-5.scope)... Jan 13 20:45:55.452764 systemd[1]: Reloading... Jan 13 20:45:55.586570 zram_generator::config[3262]: No configuration found. Jan 13 20:45:55.741046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 20:45:55.842052 systemd[1]: Reloading finished in 388 ms.
Jan 13 20:45:55.882039 kubelet[2943]: I0113 20:45:55.881993 2943 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:45:55.882266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:55.905298 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:45:55.905948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:55.912026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:56.212727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:56.220102 (kubelet)[3328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:45:56.336409 kubelet[3328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:56.336409 kubelet[3328]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:45:56.336409 kubelet[3328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:56.338042 kubelet[3328]: I0113 20:45:56.336547 3328 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:45:56.347203 kubelet[3328]: I0113 20:45:56.346737 3328 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:45:56.347203 kubelet[3328]: I0113 20:45:56.346763 3328 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:45:56.347203 kubelet[3328]: I0113 20:45:56.347002 3328 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:45:56.354416 kubelet[3328]: I0113 20:45:56.353519 3328 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:45:56.373177 kubelet[3328]: I0113 20:45:56.372432 3328 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:45:56.384657 kubelet[3328]: I0113 20:45:56.384608 3328 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:45:56.385817 kubelet[3328]: I0113 20:45:56.385399 3328 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:45:56.385817 kubelet[3328]: I0113 20:45:56.385662 3328 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.387272 3328 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.387302 3328 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.387358 3328 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.387588 3328 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.387610 3328 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.388278 3328 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:45:56.388751 kubelet[3328]: I0113 20:45:56.388301 3328 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:45:56.394551 kubelet[3328]: I0113 20:45:56.390269 3328 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:45:56.394551 kubelet[3328]: I0113 20:45:56.390539 3328 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:45:56.394551 kubelet[3328]: I0113 20:45:56.391772 3328 server.go:1256] "Started kubelet"
Jan 13 20:45:56.406793 kubelet[3328]: I0113 20:45:56.406740 3328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:45:56.416595 kubelet[3328]: I0113 20:45:56.416566 3328 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:45:56.453283 kubelet[3328]: I0113 20:45:56.453250 3328 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:45:56.481275 kubelet[3328]: I0113 20:45:56.419028 3328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:45:56.487962 kubelet[3328]: I0113 20:45:56.429714 3328 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:45:56.490249 kubelet[3328]: I0113 20:45:56.436548 3328 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:45:56.490610 kubelet[3328]: I0113 20:45:56.490592 3328 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:45:56.491543 kubelet[3328]: I0113 20:45:56.491510 3328 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:45:56.508739 kubelet[3328]: I0113 20:45:56.508703 3328 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:45:56.545682 kubelet[3328]: E0113 20:45:56.544603 3328 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Jan 13 20:45:56.553238 kubelet[3328]: I0113 20:45:56.551999 3328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:45:56.559546 kubelet[3328]: I0113 20:45:56.557742 3328 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:45:56.559546 kubelet[3328]: I0113 20:45:56.557759 3328 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:45:56.559830 kubelet[3328]: I0113 20:45:56.559811 3328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:45:56.559923 kubelet[3328]: I0113 20:45:56.559913 3328 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:45:56.560003 kubelet[3328]: I0113 20:45:56.559994 3328 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:45:56.560818 kubelet[3328]: E0113 20:45:56.560803 3328 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:45:56.581429 kubelet[3328]: I0113 20:45:56.581392 3328 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-115"
Jan 13 20:45:56.584544 kubelet[3328]: E0113 20:45:56.582899 3328 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:45:56.600626 kubelet[3328]: I0113 20:45:56.600599 3328 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-115"
Jan 13 20:45:56.601261 kubelet[3328]: I0113 20:45:56.601244 3328 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-115"
Jan 13 20:45:56.661135 kubelet[3328]: E0113 20:45:56.661095 3328 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 20:45:56.677389 kubelet[3328]: I0113 20:45:56.677352 3328 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:45:56.677389 kubelet[3328]: I0113 20:45:56.677375 3328 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:45:56.677389 kubelet[3328]: I0113 20:45:56.677394 3328 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:56.678620 kubelet[3328]: I0113 20:45:56.677695 3328 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:45:56.678620 kubelet[3328]: I0113 20:45:56.677731 3328 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:45:56.678620 kubelet[3328]: I0113 20:45:56.677742 3328 policy_none.go:49] "None policy: Start"
Jan 13 20:45:56.681544 kubelet[3328]: I0113 20:45:56.680573 3328 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:45:56.681544 kubelet[3328]: I0113 20:45:56.680613 3328 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:45:56.681544 kubelet[3328]: I0113 20:45:56.680960 3328 state_mem.go:75] "Updated machine memory state"
Jan 13 20:45:56.683455 kubelet[3328]: I0113 20:45:56.682394 3328 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:45:56.684104 kubelet[3328]: I0113 20:45:56.684079 3328 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:45:56.864630 kubelet[3328]: I0113 20:45:56.861349 3328 topology_manager.go:215] "Topology Admit Handler" podUID="ea670ea30a1ec93f22cb40489ac8fc5a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:56.864630 kubelet[3328]: I0113 20:45:56.861471 3328 topology_manager.go:215] "Topology Admit Handler" podUID="e02cae317d7fd1c13452992b233921a8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:56.864630 kubelet[3328]: I0113 20:45:56.861539 3328 topology_manager.go:215] "Topology Admit Handler" podUID="9a016103985b9d42daf769f56cdc1e64" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-115"
Jan 13 20:45:56.870071 kubelet[3328]: E0113 20:45:56.870038 3328 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-115\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-115"
Jan 13 20:45:56.871647 kubelet[3328]: E0113 20:45:56.871606 3328 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-115\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:56.894451 kubelet[3328]: I0113 20:45:56.894076 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a016103985b9d42daf769f56cdc1e64-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-115\" (UID: \"9a016103985b9d42daf769f56cdc1e64\") " pod="kube-system/kube-scheduler-ip-172-31-29-115"
Jan 13 20:45:56.894451 kubelet[3328]: I0113 20:45:56.894129 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-ca-certs\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:56.894451 kubelet[3328]: I0113 20:45:56.894155 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:56.894451 kubelet[3328]: I0113 20:45:56.894175 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:56.894451 kubelet[3328]: I0113 20:45:56.894210 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea670ea30a1ec93f22cb40489ac8fc5a-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-115\" (UID: \"ea670ea30a1ec93f22cb40489ac8fc5a\") " pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:56.894810 kubelet[3328]: I0113 20:45:56.894228 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:56.894810 kubelet[3328]: I0113 20:45:56.894282 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:56.894810 kubelet[3328]: I0113 20:45:56.894305 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:56.894810 kubelet[3328]: I0113 20:45:56.894333 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e02cae317d7fd1c13452992b233921a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-115\" (UID: \"e02cae317d7fd1c13452992b233921a8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-115"
Jan 13 20:45:57.415272 kubelet[3328]: I0113 20:45:57.415231 3328 apiserver.go:52] "Watching apiserver"
Jan 13 20:45:57.490870 kubelet[3328]: I0113 20:45:57.490827 3328 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:45:57.637580 kubelet[3328]: E0113 20:45:57.637214 3328 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-115\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-115"
Jan 13 20:45:57.655700 sudo[2315]: pam_unix(sudo:session): session closed for user root
Jan 13 20:45:57.673028 kubelet[3328]: I0113 20:45:57.671863 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-115" podStartSLOduration=3.671822257 podStartE2EDuration="3.671822257s" podCreationTimestamp="2025-01-13 20:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:57.661743348 +0000 UTC m=+1.423823465" watchObservedRunningTime="2025-01-13 20:45:57.671822257 +0000 UTC m=+1.433902363"
Jan 13 20:45:57.679472 sshd[2314]: Connection closed by 139.178.89.65 port 41020 Jan 13 20:45:57.680604 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:57.687024 systemd[1]: sshd@4-172.31.29.115:22-139.178.89.65:41020.service: Deactivated successfully. Jan 13 20:45:57.695590 kubelet[3328]: I0113 20:45:57.693782 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-115" podStartSLOduration=4.693716865 podStartE2EDuration="4.693716865s" podCreationTimestamp="2025-01-13 20:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:57.672056766 +0000 UTC m=+1.434136883" watchObservedRunningTime="2025-01-13 20:45:57.693716865 +0000 UTC m=+1.455796964" Jan 13 20:45:57.695590 kubelet[3328]: I0113 20:45:57.693920 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-115" podStartSLOduration=1.693893206 podStartE2EDuration="1.693893206s" podCreationTimestamp="2025-01-13 20:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:57.685481546 +0000 UTC m=+1.447561662" watchObservedRunningTime="2025-01-13 20:45:57.693893206 +0000 UTC m=+1.455973323" Jan 13 20:45:57.696689 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:45:57.698723 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:45:57.700880 systemd-logind[1973]: Removed session 5. Jan 13 20:45:59.848347 update_engine[1975]: I20250113 20:45:59.848273 1975 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:45:59.975461 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3401)
Jan 13 20:46:00.227567 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3392)
Jan 13 20:46:09.779045 kubelet[3328]: I0113 20:46:09.778993 3328 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:46:09.782476 kubelet[3328]: I0113 20:46:09.781734 3328 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:46:09.782560 containerd[2001]: time="2025-01-13T20:46:09.781288819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:46:10.555928 kubelet[3328]: I0113 20:46:10.555881 3328 topology_manager.go:215] "Topology Admit Handler" podUID="d1944860-49a7-41ef-ad5d-382d0c4fa65c" podNamespace="kube-system" podName="kube-proxy-j59f6"
Jan 13 20:46:10.580141 kubelet[3328]: I0113 20:46:10.577622 3328 topology_manager.go:215] "Topology Admit Handler" podUID="bbb528da-f204-47f5-9735-147cf102864a" podNamespace="kube-flannel" podName="kube-flannel-ds-22lnb"
Jan 13 20:46:10.604422 kubelet[3328]: I0113 20:46:10.604391 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1944860-49a7-41ef-ad5d-382d0c4fa65c-xtables-lock\") pod \"kube-proxy-j59f6\" (UID: \"d1944860-49a7-41ef-ad5d-382d0c4fa65c\") " pod="kube-system/kube-proxy-j59f6"
Jan 13 20:46:10.604422 kubelet[3328]: I0113 20:46:10.604443 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbb528da-f204-47f5-9735-147cf102864a-run\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.604629 kubelet[3328]: I0113 20:46:10.604471 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bbb528da-f204-47f5-9735-147cf102864a-cni\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.604629 kubelet[3328]: I0113 20:46:10.604499 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbb528da-f204-47f5-9735-147cf102864a-xtables-lock\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.604629 kubelet[3328]: I0113 20:46:10.604540 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9z9\" (UniqueName: \"kubernetes.io/projected/bbb528da-f204-47f5-9735-147cf102864a-kube-api-access-mk9z9\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.604629 kubelet[3328]: I0113 20:46:10.604567 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1944860-49a7-41ef-ad5d-382d0c4fa65c-kube-proxy\") pod \"kube-proxy-j59f6\" (UID: \"d1944860-49a7-41ef-ad5d-382d0c4fa65c\") " pod="kube-system/kube-proxy-j59f6"
Jan 13 20:46:10.604629 kubelet[3328]: I0113 20:46:10.604596 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1944860-49a7-41ef-ad5d-382d0c4fa65c-lib-modules\") pod \"kube-proxy-j59f6\" (UID: \"d1944860-49a7-41ef-ad5d-382d0c4fa65c\") " pod="kube-system/kube-proxy-j59f6"
Jan 13 20:46:10.605293 kubelet[3328]: I0113 20:46:10.604628 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xj8\" (UniqueName: \"kubernetes.io/projected/d1944860-49a7-41ef-ad5d-382d0c4fa65c-kube-api-access-v4xj8\") pod \"kube-proxy-j59f6\" (UID: \"d1944860-49a7-41ef-ad5d-382d0c4fa65c\") " pod="kube-system/kube-proxy-j59f6"
Jan 13 20:46:10.605293 kubelet[3328]: I0113 20:46:10.604656 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bbb528da-f204-47f5-9735-147cf102864a-cni-plugin\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.605293 kubelet[3328]: I0113 20:46:10.604684 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bbb528da-f204-47f5-9735-147cf102864a-flannel-cfg\") pod \"kube-flannel-ds-22lnb\" (UID: \"bbb528da-f204-47f5-9735-147cf102864a\") " pod="kube-flannel/kube-flannel-ds-22lnb"
Jan 13 20:46:10.873236 containerd[2001]: time="2025-01-13T20:46:10.873118984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j59f6,Uid:d1944860-49a7-41ef-ad5d-382d0c4fa65c,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:10.885861 containerd[2001]: time="2025-01-13T20:46:10.885819631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-22lnb,Uid:bbb528da-f204-47f5-9735-147cf102864a,Namespace:kube-flannel,Attempt:0,}"
Jan 13 20:46:10.932145 containerd[2001]: time="2025-01-13T20:46:10.931708144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:10.932145 containerd[2001]: time="2025-01-13T20:46:10.931769700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:10.932605 containerd[2001]: time="2025-01-13T20:46:10.931792971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:10.933029 containerd[2001]: time="2025-01-13T20:46:10.932507187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:10.972182 containerd[2001]: time="2025-01-13T20:46:10.971786903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:10.972182 containerd[2001]: time="2025-01-13T20:46:10.971945226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:10.972182 containerd[2001]: time="2025-01-13T20:46:10.972036912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:10.974384 containerd[2001]: time="2025-01-13T20:46:10.972255171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:11.004505 containerd[2001]: time="2025-01-13T20:46:11.004365445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j59f6,Uid:d1944860-49a7-41ef-ad5d-382d0c4fa65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcd94baf3cd8bc7a2d8e23acfa4d7a26a42159987f10d966defb77a4024ede70\""
Jan 13 20:46:11.009909 containerd[2001]: time="2025-01-13T20:46:11.009687417Z" level=info msg="CreateContainer within sandbox \"fcd94baf3cd8bc7a2d8e23acfa4d7a26a42159987f10d966defb77a4024ede70\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:46:11.038420 containerd[2001]: time="2025-01-13T20:46:11.038376387Z" level=info msg="CreateContainer within sandbox \"fcd94baf3cd8bc7a2d8e23acfa4d7a26a42159987f10d966defb77a4024ede70\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"464caefa64e7096ab58363f5962d401406274bbb89a323ab4cf8800701f94713\""
Jan 13 20:46:11.040505 containerd[2001]: time="2025-01-13T20:46:11.040064489Z" level=info msg="StartContainer for \"464caefa64e7096ab58363f5962d401406274bbb89a323ab4cf8800701f94713\""
Jan 13 20:46:11.062460 containerd[2001]: time="2025-01-13T20:46:11.062415303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-22lnb,Uid:bbb528da-f204-47f5-9735-147cf102864a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\""
Jan 13 20:46:11.068181 containerd[2001]: time="2025-01-13T20:46:11.068142699Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 20:46:11.124009 containerd[2001]: time="2025-01-13T20:46:11.123833027Z" level=info msg="StartContainer for \"464caefa64e7096ab58363f5962d401406274bbb89a323ab4cf8800701f94713\" returns successfully"
Jan 13 20:46:11.661382 kubelet[3328]: I0113 20:46:11.661315 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j59f6" podStartSLOduration=1.6612683270000002 podStartE2EDuration="1.661268327s" podCreationTimestamp="2025-01-13 20:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:11.660625522 +0000 UTC m=+15.422705639" watchObservedRunningTime="2025-01-13 20:46:11.661268327 +0000 UTC m=+15.423348442"
Jan 13 20:46:12.826499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700839925.mount: Deactivated successfully.
Jan 13 20:46:12.896543 containerd[2001]: time="2025-01-13T20:46:12.896477966Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:12.898610 containerd[2001]: time="2025-01-13T20:46:12.898412893Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 13 20:46:12.900974 containerd[2001]: time="2025-01-13T20:46:12.900940617Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:12.904931 containerd[2001]: time="2025-01-13T20:46:12.904841587Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:12.905785 containerd[2001]: time="2025-01-13T20:46:12.905751128Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.837385737s"
Jan 13 20:46:12.905855 containerd[2001]: time="2025-01-13T20:46:12.905791009Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 13 20:46:12.908421 containerd[2001]: time="2025-01-13T20:46:12.908394608Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 20:46:12.931986 containerd[2001]: time="2025-01-13T20:46:12.931945701Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818\""
Jan 13 20:46:12.933097 containerd[2001]: time="2025-01-13T20:46:12.932737817Z" level=info msg="StartContainer for \"8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818\""
Jan 13 20:46:12.998625 containerd[2001]: time="2025-01-13T20:46:12.998586977Z" level=info msg="StartContainer for \"8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818\" returns successfully"
Jan 13 20:46:13.046443 containerd[2001]: time="2025-01-13T20:46:13.046374778Z" level=info msg="shim disconnected" id=8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818 namespace=k8s.io
Jan 13 20:46:13.046443 containerd[2001]: time="2025-01-13T20:46:13.046440206Z" level=warning msg="cleaning up after shim disconnected" id=8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818 namespace=k8s.io
Jan 13 20:46:13.046443 containerd[2001]: time="2025-01-13T20:46:13.046454558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:13.672470 containerd[2001]: time="2025-01-13T20:46:13.671604563Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 20:46:13.826420 systemd[1]: run-containerd-runc-k8s.io-8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818-runc.TpUhql.mount: Deactivated successfully.
Jan 13 20:46:13.826702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8876647cc0078377ffe8cc048e1fcceefb3fc42e90e67fcbfcba81a24e8fc818-rootfs.mount: Deactivated successfully.
Jan 13 20:46:15.722239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535897078.mount: Deactivated successfully.
Jan 13 20:46:16.602713 containerd[2001]: time="2025-01-13T20:46:16.602651783Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:16.604555 containerd[2001]: time="2025-01-13T20:46:16.604439891Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 13 20:46:16.607260 containerd[2001]: time="2025-01-13T20:46:16.606888299Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:16.611195 containerd[2001]: time="2025-01-13T20:46:16.611081619Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:16.612374 containerd[2001]: time="2025-01-13T20:46:16.612336802Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.939740849s"
Jan 13 20:46:16.612463 containerd[2001]: time="2025-01-13T20:46:16.612381299Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 13 20:46:16.614700 containerd[2001]: time="2025-01-13T20:46:16.614672707Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:46:16.639195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333328130.mount: Deactivated successfully.
Jan 13 20:46:16.645816 containerd[2001]: time="2025-01-13T20:46:16.645768563Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329\""
Jan 13 20:46:16.647613 containerd[2001]: time="2025-01-13T20:46:16.646430780Z" level=info msg="StartContainer for \"311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329\""
Jan 13 20:46:16.717835 containerd[2001]: time="2025-01-13T20:46:16.717793782Z" level=info msg="StartContainer for \"311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329\" returns successfully"
Jan 13 20:46:16.737835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329-rootfs.mount: Deactivated successfully.
Jan 13 20:46:16.754215 containerd[2001]: time="2025-01-13T20:46:16.754140795Z" level=error msg="collecting metrics for 311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329" error="cgroups: cgroup deleted: unknown"
Jan 13 20:46:16.790924 kubelet[3328]: I0113 20:46:16.790897 3328 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:46:16.816840 kubelet[3328]: I0113 20:46:16.816797 3328 topology_manager.go:215] "Topology Admit Handler" podUID="9b4734b7-fbf7-49d9-9ee9-51d6b5f65779" podNamespace="kube-system" podName="coredns-76f75df574-dpq4s"
Jan 13 20:46:16.825803 kubelet[3328]: I0113 20:46:16.822410 3328 topology_manager.go:215] "Topology Admit Handler" podUID="458d6de3-d87e-4a50-a1a7-dd434df2beb6" podNamespace="kube-system" podName="coredns-76f75df574-4sblt"
Jan 13 20:46:16.854738 kubelet[3328]: I0113 20:46:16.854638 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v48nv\" (UniqueName: \"kubernetes.io/projected/9b4734b7-fbf7-49d9-9ee9-51d6b5f65779-kube-api-access-v48nv\") pod \"coredns-76f75df574-dpq4s\" (UID: \"9b4734b7-fbf7-49d9-9ee9-51d6b5f65779\") " pod="kube-system/coredns-76f75df574-dpq4s"
Jan 13 20:46:16.854738 kubelet[3328]: I0113 20:46:16.854687 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/458d6de3-d87e-4a50-a1a7-dd434df2beb6-config-volume\") pod \"coredns-76f75df574-4sblt\" (UID: \"458d6de3-d87e-4a50-a1a7-dd434df2beb6\") " pod="kube-system/coredns-76f75df574-4sblt"
Jan 13 20:46:16.854738 kubelet[3328]: I0113 20:46:16.854722 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qfp\" (UniqueName: \"kubernetes.io/projected/458d6de3-d87e-4a50-a1a7-dd434df2beb6-kube-api-access-x6qfp\") pod \"coredns-76f75df574-4sblt\" (UID: \"458d6de3-d87e-4a50-a1a7-dd434df2beb6\") " pod="kube-system/coredns-76f75df574-4sblt"
Jan 13 20:46:16.854983 kubelet[3328]: I0113 20:46:16.854751 3328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b4734b7-fbf7-49d9-9ee9-51d6b5f65779-config-volume\") pod \"coredns-76f75df574-dpq4s\" (UID: \"9b4734b7-fbf7-49d9-9ee9-51d6b5f65779\") " pod="kube-system/coredns-76f75df574-dpq4s"
Jan 13 20:46:16.922219 containerd[2001]: time="2025-01-13T20:46:16.922147579Z" level=info msg="shim disconnected" id=311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329 namespace=k8s.io
Jan 13 20:46:16.922219 containerd[2001]: time="2025-01-13T20:46:16.922212714Z" level=warning msg="cleaning up after shim disconnected" id=311de3e40a23f67d07abe6391ea48d2e86841527ed1da0afb2ddf3e1664ac329 namespace=k8s.io
Jan 13 20:46:16.922219 containerd[2001]: time="2025-01-13T20:46:16.922224677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:17.135134 containerd[2001]: time="2025-01-13T20:46:17.134439156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq4s,Uid:9b4734b7-fbf7-49d9-9ee9-51d6b5f65779,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:17.135729 containerd[2001]: time="2025-01-13T20:46:17.135266247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sblt,Uid:458d6de3-d87e-4a50-a1a7-dd434df2beb6,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:17.221878 containerd[2001]: time="2025-01-13T20:46:17.221828034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sblt,Uid:458d6de3-d87e-4a50-a1a7-dd434df2beb6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc89f09ab7cfff0fba28d1eb1a11819550d8a811a2fa960c2053a6ee3a9192ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:46:17.222207 kubelet[3328]: E0113 20:46:17.222179 3328 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc89f09ab7cfff0fba28d1eb1a11819550d8a811a2fa960c2053a6ee3a9192ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:46:17.222346 kubelet[3328]: E0113 20:46:17.222247 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc89f09ab7cfff0fba28d1eb1a11819550d8a811a2fa960c2053a6ee3a9192ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-4sblt"
Jan 13 20:46:17.222346 kubelet[3328]: E0113 20:46:17.222280 3328 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc89f09ab7cfff0fba28d1eb1a11819550d8a811a2fa960c2053a6ee3a9192ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-4sblt"
Jan 13 20:46:17.222442 kubelet[3328]: E0113 20:46:17.222359 3328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4sblt_kube-system(458d6de3-d87e-4a50-a1a7-dd434df2beb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4sblt_kube-system(458d6de3-d87e-4a50-a1a7-dd434df2beb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc89f09ab7cfff0fba28d1eb1a11819550d8a811a2fa960c2053a6ee3a9192ac\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-4sblt" podUID="458d6de3-d87e-4a50-a1a7-dd434df2beb6"
Jan 13 20:46:17.223942 containerd[2001]: time="2025-01-13T20:46:17.223801852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq4s,Uid:9b4734b7-fbf7-49d9-9ee9-51d6b5f65779,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70526f0492e5acdb35bed3849be9dfc441da91a6a58e84df0b8161686301f890\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:46:17.224232 kubelet[3328]: E0113 20:46:17.224210 3328 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70526f0492e5acdb35bed3849be9dfc441da91a6a58e84df0b8161686301f890\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 20:46:17.224310 kubelet[3328]: E0113 20:46:17.224292 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70526f0492e5acdb35bed3849be9dfc441da91a6a58e84df0b8161686301f890\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dpq4s"
Jan 13 20:46:17.224371 kubelet[3328]: E0113 20:46:17.224339 3328 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70526f0492e5acdb35bed3849be9dfc441da91a6a58e84df0b8161686301f890\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dpq4s"
Jan 13 20:46:17.224604 kubelet[3328]: E0113 20:46:17.224436 3328 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dpq4s_kube-system(9b4734b7-fbf7-49d9-9ee9-51d6b5f65779)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dpq4s_kube-system(9b4734b7-fbf7-49d9-9ee9-51d6b5f65779)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70526f0492e5acdb35bed3849be9dfc441da91a6a58e84df0b8161686301f890\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-dpq4s" podUID="9b4734b7-fbf7-49d9-9ee9-51d6b5f65779"
Jan 13 20:46:17.695567 containerd[2001]: time="2025-01-13T20:46:17.695239749Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 20:46:17.723536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220933638.mount: Deactivated successfully.
Jan 13 20:46:17.727901 containerd[2001]: time="2025-01-13T20:46:17.727841854Z" level=info msg="CreateContainer within sandbox \"7ef3c1ed7beae3722ba0f2f945c9502278b30e123a21fc87f08b83a10db5f10f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"87e9dc87a373edcc889826d504391b9b97f42b23c78be239eaef094fbfb2a5cd\""
Jan 13 20:46:17.729248 containerd[2001]: time="2025-01-13T20:46:17.728598859Z" level=info msg="StartContainer for \"87e9dc87a373edcc889826d504391b9b97f42b23c78be239eaef094fbfb2a5cd\""
Jan 13 20:46:17.805875 containerd[2001]: time="2025-01-13T20:46:17.805807754Z" level=info msg="StartContainer for \"87e9dc87a373edcc889826d504391b9b97f42b23c78be239eaef094fbfb2a5cd\" returns successfully"
Jan 13 20:46:18.890474 (udev-worker)[4043]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:46:18.918121 systemd-networkd[1570]: flannel.1: Link UP
Jan 13 20:46:18.918136 systemd-networkd[1570]: flannel.1: Gained carrier
Jan 13 20:46:20.099822 systemd-networkd[1570]: flannel.1: Gained IPv6LL
Jan 13 20:46:22.487032 ntpd[1957]: Listen normally on 6 flannel.1 192.168.0.0:123
Jan 13 20:46:22.487124 ntpd[1957]: Listen normally on 7 flannel.1 [fe80::6c17:65ff:feeb:a1d%4]:123
Jan 13 20:46:22.487650 ntpd[1957]: 13 Jan 20:46:22 ntpd[1957]: Listen normally on 6 flannel.1 192.168.0.0:123
Jan 13 20:46:22.487650 ntpd[1957]: 13 Jan 20:46:22 ntpd[1957]: Listen normally on 7 flannel.1 [fe80::6c17:65ff:feeb:a1d%4]:123
Jan 13 20:46:30.563593 containerd[2001]: time="2025-01-13T20:46:30.563517608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq4s,Uid:9b4734b7-fbf7-49d9-9ee9-51d6b5f65779,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:30.662697 systemd-networkd[1570]: cni0: Link UP
Jan 13 20:46:30.662709 systemd-networkd[1570]: cni0: Gained carrier
Jan 13 20:46:30.669239 (udev-worker)[4181]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:46:30.669869 systemd-networkd[1570]: cni0: Lost carrier
Jan 13 20:46:30.678264 systemd-networkd[1570]: vethb761b188: Link UP
Jan 13 20:46:30.680637 kernel: cni0: port 1(vethb761b188) entered blocking state
Jan 13 20:46:30.680730 kernel: cni0: port 1(vethb761b188) entered disabled state
Jan 13 20:46:30.682878 kernel: vethb761b188: entered allmulticast mode
Jan 13 20:46:30.682948 kernel: vethb761b188: entered promiscuous mode
Jan 13 20:46:30.684615 kernel: cni0: port 1(vethb761b188) entered blocking state
Jan 13 20:46:30.684687 kernel: cni0: port 1(vethb761b188) entered forwarding state
Jan 13 20:46:30.684715 kernel: cni0: port 1(vethb761b188) entered disabled state
Jan 13 20:46:30.686031 (udev-worker)[4184]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:46:30.694653 kernel: cni0: port 1(vethb761b188) entered blocking state
Jan 13 20:46:30.694736 kernel: cni0: port 1(vethb761b188) entered forwarding state
Jan 13 20:46:30.694726 systemd-networkd[1570]: vethb761b188: Gained carrier
Jan 13 20:46:30.695719 systemd-networkd[1570]: cni0: Gained carrier
Jan 13 20:46:30.746177 containerd[2001]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:46:30.746177 containerd[2001]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:46:30.770746 containerd[2001]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T20:46:30.769502308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:30.770746 containerd[2001]: time="2025-01-13T20:46:30.770080169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:30.770746 containerd[2001]: time="2025-01-13T20:46:30.770115104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:30.771972 containerd[2001]: time="2025-01-13T20:46:30.771896789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:30.819594 systemd[1]: run-containerd-runc-k8s.io-1314d0556af53b0053effb73443a1140c8db6891e29507def05b8f44ddead062-runc.jENhwP.mount: Deactivated successfully.
Jan 13 20:46:30.861078 containerd[2001]: time="2025-01-13T20:46:30.860934914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq4s,Uid:9b4734b7-fbf7-49d9-9ee9-51d6b5f65779,Namespace:kube-system,Attempt:0,} returns sandbox id \"1314d0556af53b0053effb73443a1140c8db6891e29507def05b8f44ddead062\""
Jan 13 20:46:30.864589 containerd[2001]: time="2025-01-13T20:46:30.864554303Z" level=info msg="CreateContainer within sandbox \"1314d0556af53b0053effb73443a1140c8db6891e29507def05b8f44ddead062\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:46:30.887934 containerd[2001]: time="2025-01-13T20:46:30.887885019Z" level=info msg="CreateContainer within sandbox \"1314d0556af53b0053effb73443a1140c8db6891e29507def05b8f44ddead062\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfc91745ce845ebdd00e37bc425a0105f0ce2accab5de3c0a3f79c8bd590da5c\""
Jan 13 20:46:30.888620 containerd[2001]: time="2025-01-13T20:46:30.888580129Z" level=info msg="StartContainer for \"dfc91745ce845ebdd00e37bc425a0105f0ce2accab5de3c0a3f79c8bd590da5c\""
Jan 13 20:46:30.944434 containerd[2001]: time="2025-01-13T20:46:30.944305619Z" level=info msg="StartContainer for \"dfc91745ce845ebdd00e37bc425a0105f0ce2accab5de3c0a3f79c8bd590da5c\" returns successfully"
Jan 13 20:46:31.562487 containerd[2001]: time="2025-01-13T20:46:31.562436694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sblt,Uid:458d6de3-d87e-4a50-a1a7-dd434df2beb6,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:31.588028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417362225.mount: Deactivated successfully.
Jan 13 20:46:31.620309 systemd-networkd[1570]: veth56729522: Link UP
Jan 13 20:46:31.621918 kernel: cni0: port 2(veth56729522) entered blocking state
Jan 13 20:46:31.621986 kernel: cni0: port 2(veth56729522) entered disabled state
Jan 13 20:46:31.622014 kernel: veth56729522: entered allmulticast mode
Jan 13 20:46:31.623879 kernel: veth56729522: entered promiscuous mode
Jan 13 20:46:31.623944 kernel: cni0: port 2(veth56729522) entered blocking state
Jan 13 20:46:31.623969 kernel: cni0: port 2(veth56729522) entered forwarding state
Jan 13 20:46:31.635435 systemd-networkd[1570]: veth56729522: Gained carrier
Jan 13 20:46:31.636393 containerd[2001]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 13 20:46:31.636393 containerd[2001]: delegateAdd: netconf sent to delegate plugin:
Jan 13 20:46:31.665392 containerd[2001]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T20:46:31.664978980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:31.665392 containerd[2001]: time="2025-01-13T20:46:31.665090059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:31.665392 containerd[2001]: time="2025-01-13T20:46:31.665132541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:31.665392 containerd[2001]: time="2025-01-13T20:46:31.665245472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:31.740553 kubelet[3328]: I0113 20:46:31.740337 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-22lnb" podStartSLOduration=16.190999097 podStartE2EDuration="21.738986291s" podCreationTimestamp="2025-01-13 20:46:10 +0000 UTC" firstStartedPulling="2025-01-13 20:46:11.064683469 +0000 UTC m=+14.826763574" lastFinishedPulling="2025-01-13 20:46:16.612670661 +0000 UTC m=+20.374750768" observedRunningTime="2025-01-13 20:46:18.711542232 +0000 UTC m=+22.473622346" watchObservedRunningTime="2025-01-13 20:46:31.738986291 +0000 UTC m=+35.501066426"
Jan 13 20:46:31.740553 kubelet[3328]: I0113 20:46:31.740507 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dpq4s" podStartSLOduration=21.740474069 podStartE2EDuration="21.740474069s" podCreationTimestamp="2025-01-13 20:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:31.737941131 +0000 UTC m=+35.500021250" watchObservedRunningTime="2025-01-13 20:46:31.740474069 +0000 UTC m=+35.502554189"
Jan 13 20:46:31.749287 systemd-networkd[1570]: cni0: Gained IPv6LL
Jan 13 20:46:31.783910 containerd[2001]: time="2025-01-13T20:46:31.783868377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4sblt,Uid:458d6de3-d87e-4a50-a1a7-dd434df2beb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0cac731d3081c011589532b1d7c408d5b2215fc2499a6218fd745662937919d\""
Jan 13 20:46:31.795502 containerd[2001]: time="2025-01-13T20:46:31.795064882Z" level=info msg="CreateContainer within sandbox \"a0cac731d3081c011589532b1d7c408d5b2215fc2499a6218fd745662937919d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:46:31.819880 containerd[2001]: time="2025-01-13T20:46:31.819766975Z" level=info msg="CreateContainer within sandbox \"a0cac731d3081c011589532b1d7c408d5b2215fc2499a6218fd745662937919d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64afe785de208c0412ef3d4fa9f21e9170e5fad819d7b5e54df4385d36a2287a\""
Jan 13 20:46:31.822497 containerd[2001]: time="2025-01-13T20:46:31.822459425Z" level=info msg="StartContainer for \"64afe785de208c0412ef3d4fa9f21e9170e5fad819d7b5e54df4385d36a2287a\""
Jan 13 20:46:31.888609 containerd[2001]: time="2025-01-13T20:46:31.887702175Z" level=info msg="StartContainer for \"64afe785de208c0412ef3d4fa9f21e9170e5fad819d7b5e54df4385d36a2287a\" returns successfully"
Jan 13 20:46:32.003960 systemd-networkd[1570]: vethb761b188: Gained IPv6LL
Jan 13 20:46:32.587929 systemd[1]: run-containerd-runc-k8s.io-a0cac731d3081c011589532b1d7c408d5b2215fc2499a6218fd745662937919d-runc.uWwJN6.mount: Deactivated successfully.
Jan 13 20:46:32.775383 kubelet[3328]: I0113 20:46:32.775341 3328 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4sblt" podStartSLOduration=22.775286323 podStartE2EDuration="22.775286323s" podCreationTimestamp="2025-01-13 20:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:32.758273819 +0000 UTC m=+36.520353936" watchObservedRunningTime="2025-01-13 20:46:32.775286323 +0000 UTC m=+36.537366438"
Jan 13 20:46:33.541649 systemd-networkd[1570]: veth56729522: Gained IPv6LL
Jan 13 20:46:36.487094 ntpd[1957]: Listen normally on 8 cni0 192.168.0.1:123
Jan 13 20:46:36.487884 ntpd[1957]: 13 Jan 20:46:36 ntpd[1957]: Listen normally on 8 cni0 192.168.0.1:123
Jan 13 20:46:36.487884 ntpd[1957]: 13 Jan 20:46:36 ntpd[1957]: Listen normally on 9 cni0 [fe80::3c7a:8ff:fe85:3e71%5]:123
Jan 13 20:46:36.487884 ntpd[1957]: 13 Jan 20:46:36 ntpd[1957]: Listen normally on 10 vethb761b188 [fe80::98ed:2cff:fea4:4437%6]:123
Jan 13 20:46:36.487884 ntpd[1957]: 13 Jan 20:46:36 ntpd[1957]: Listen normally on 11 veth56729522 [fe80::85e:ecff:febc:d11f%7]:123
Jan 13 20:46:36.487190 ntpd[1957]: Listen normally on 9 cni0 [fe80::3c7a:8ff:fe85:3e71%5]:123
Jan 13 20:46:36.487255 ntpd[1957]: Listen normally on 10 vethb761b188 [fe80::98ed:2cff:fea4:4437%6]:123
Jan 13 20:46:36.487294 ntpd[1957]: Listen normally on 11 veth56729522 [fe80::85e:ecff:febc:d11f%7]:123
Jan 13 20:46:48.239878 systemd[1]: Started sshd@5-172.31.29.115:22-139.178.89.65:47610.service - OpenSSH per-connection server daemon (139.178.89.65:47610).
Jan 13 20:46:48.463194 sshd[4453]: Accepted publickey for core from 139.178.89.65 port 47610 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:46:48.465754 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:48.472698 systemd-logind[1973]: New session 6 of user core.
Jan 13 20:46:48.479858 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:46:48.761441 sshd[4456]: Connection closed by 139.178.89.65 port 47610
Jan 13 20:46:48.764547 sshd-session[4453]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:48.770367 systemd[1]: sshd@5-172.31.29.115:22-139.178.89.65:47610.service: Deactivated successfully.
Jan 13 20:46:48.771954 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:46:48.776241 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:46:48.778452 systemd-logind[1973]: Removed session 6.
Jan 13 20:46:53.791370 systemd[1]: Started sshd@6-172.31.29.115:22-139.178.89.65:60892.service - OpenSSH per-connection server daemon (139.178.89.65:60892).
Jan 13 20:46:53.959836 sshd[4488]: Accepted publickey for core from 139.178.89.65 port 60892 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:46:53.961652 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:53.966883 systemd-logind[1973]: New session 7 of user core.
Jan 13 20:46:53.974843 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:46:54.175049 sshd[4491]: Connection closed by 139.178.89.65 port 60892
Jan 13 20:46:54.177045 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:54.187592 systemd[1]: sshd@6-172.31.29.115:22-139.178.89.65:60892.service: Deactivated successfully.
Jan 13 20:46:54.197966 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:46:54.198870 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:46:54.200333 systemd-logind[1973]: Removed session 7.
Jan 13 20:46:59.204236 systemd[1]: Started sshd@7-172.31.29.115:22-139.178.89.65:60908.service - OpenSSH per-connection server daemon (139.178.89.65:60908).
Jan 13 20:46:59.397808 sshd[4526]: Accepted publickey for core from 139.178.89.65 port 60908 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:46:59.399538 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:59.405234 systemd-logind[1973]: New session 8 of user core.
Jan 13 20:46:59.410897 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:46:59.663441 sshd[4549]: Connection closed by 139.178.89.65 port 60908
Jan 13 20:46:59.664258 sshd-session[4526]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:59.668017 systemd[1]: sshd@7-172.31.29.115:22-139.178.89.65:60908.service: Deactivated successfully.
Jan 13 20:46:59.674286 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:46:59.676869 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:46:59.678021 systemd-logind[1973]: Removed session 8.
Jan 13 20:46:59.694335 systemd[1]: Started sshd@8-172.31.29.115:22-139.178.89.65:60914.service - OpenSSH per-connection server daemon (139.178.89.65:60914).
Jan 13 20:46:59.860139 sshd[4561]: Accepted publickey for core from 139.178.89.65 port 60914 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:46:59.862056 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:59.868019 systemd-logind[1973]: New session 9 of user core.
Jan 13 20:46:59.876468 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:47:00.255211 sshd[4564]: Connection closed by 139.178.89.65 port 60914
Jan 13 20:47:00.257154 sshd-session[4561]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:00.293036 systemd[1]: Started sshd@9-172.31.29.115:22-139.178.89.65:60922.service - OpenSSH per-connection server daemon (139.178.89.65:60922).
Jan 13 20:47:00.302260 systemd[1]: sshd@8-172.31.29.115:22-139.178.89.65:60914.service: Deactivated successfully.
Jan 13 20:47:00.318700 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:47:00.330912 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:47:00.339162 systemd-logind[1973]: Removed session 9.
Jan 13 20:47:00.564389 sshd[4570]: Accepted publickey for core from 139.178.89.65 port 60922 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:00.565403 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:00.572268 systemd-logind[1973]: New session 10 of user core.
Jan 13 20:47:00.578201 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:47:00.805388 sshd[4575]: Connection closed by 139.178.89.65 port 60922
Jan 13 20:47:00.807248 sshd-session[4570]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:00.812493 systemd[1]: sshd@9-172.31.29.115:22-139.178.89.65:60922.service: Deactivated successfully.
Jan 13 20:47:00.818034 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:47:00.819634 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:47:00.821082 systemd-logind[1973]: Removed session 10.
Jan 13 20:47:05.842408 systemd[1]: Started sshd@10-172.31.29.115:22-139.178.89.65:35810.service - OpenSSH per-connection server daemon (139.178.89.65:35810).
Jan 13 20:47:06.056132 sshd[4607]: Accepted publickey for core from 139.178.89.65 port 35810 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:06.057804 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:06.082019 systemd-logind[1973]: New session 11 of user core.
Jan 13 20:47:06.090997 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:47:06.342805 sshd[4610]: Connection closed by 139.178.89.65 port 35810
Jan 13 20:47:06.345914 sshd-session[4607]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:06.366212 systemd[1]: sshd@10-172.31.29.115:22-139.178.89.65:35810.service: Deactivated successfully.
Jan 13 20:47:06.376071 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:47:06.383518 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:47:06.385515 systemd-logind[1973]: Removed session 11.
Jan 13 20:47:11.375333 systemd[1]: Started sshd@11-172.31.29.115:22-139.178.89.65:33670.service - OpenSSH per-connection server daemon (139.178.89.65:33670).
Jan 13 20:47:11.553653 sshd[4643]: Accepted publickey for core from 139.178.89.65 port 33670 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:11.557269 sshd-session[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:11.564580 systemd-logind[1973]: New session 12 of user core.
Jan 13 20:47:11.574024 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:47:11.774086 sshd[4646]: Connection closed by 139.178.89.65 port 33670
Jan 13 20:47:11.774841 sshd-session[4643]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:11.778824 systemd[1]: sshd@11-172.31.29.115:22-139.178.89.65:33670.service: Deactivated successfully.
Jan 13 20:47:11.784710 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:47:11.785881 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:47:11.788755 systemd-logind[1973]: Removed session 12.
Jan 13 20:47:16.804888 systemd[1]: Started sshd@12-172.31.29.115:22-139.178.89.65:33678.service - OpenSSH per-connection server daemon (139.178.89.65:33678).
Jan 13 20:47:16.997607 sshd[4678]: Accepted publickey for core from 139.178.89.65 port 33678 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:16.999625 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:17.013355 systemd-logind[1973]: New session 13 of user core.
Jan 13 20:47:17.023852 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:47:17.218625 sshd[4681]: Connection closed by 139.178.89.65 port 33678
Jan 13 20:47:17.219474 sshd-session[4678]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:17.224094 systemd[1]: sshd@12-172.31.29.115:22-139.178.89.65:33678.service: Deactivated successfully.
Jan 13 20:47:17.225865 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:47:17.227538 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:47:17.228615 systemd-logind[1973]: Removed session 13.
Jan 13 20:47:22.247871 systemd[1]: Started sshd@13-172.31.29.115:22-139.178.89.65:49850.service - OpenSSH per-connection server daemon (139.178.89.65:49850).
Jan 13 20:47:22.409417 sshd[4714]: Accepted publickey for core from 139.178.89.65 port 49850 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:22.411310 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:22.419743 systemd-logind[1973]: New session 14 of user core.
Jan 13 20:47:22.428148 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:47:22.623722 sshd[4717]: Connection closed by 139.178.89.65 port 49850
Jan 13 20:47:22.624974 sshd-session[4714]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:22.631456 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:47:22.632285 systemd[1]: sshd@13-172.31.29.115:22-139.178.89.65:49850.service: Deactivated successfully.
Jan 13 20:47:22.636979 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:47:22.638709 systemd-logind[1973]: Removed session 14.
Jan 13 20:47:22.658156 systemd[1]: Started sshd@14-172.31.29.115:22-139.178.89.65:49858.service - OpenSSH per-connection server daemon (139.178.89.65:49858).
Jan 13 20:47:22.837578 sshd[4729]: Accepted publickey for core from 139.178.89.65 port 49858 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:22.838840 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:22.859429 systemd-logind[1973]: New session 15 of user core.
Jan 13 20:47:22.870016 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:47:23.455271 sshd[4732]: Connection closed by 139.178.89.65 port 49858
Jan 13 20:47:23.456900 sshd-session[4729]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:23.459995 systemd[1]: sshd@14-172.31.29.115:22-139.178.89.65:49858.service: Deactivated successfully.
Jan 13 20:47:23.466646 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:47:23.467654 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:47:23.468779 systemd-logind[1973]: Removed session 15.
Jan 13 20:47:23.481889 systemd[1]: Started sshd@15-172.31.29.115:22-139.178.89.65:49874.service - OpenSSH per-connection server daemon (139.178.89.65:49874).
Jan 13 20:47:23.656425 sshd[4740]: Accepted publickey for core from 139.178.89.65 port 49874 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:23.658079 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:23.664834 systemd-logind[1973]: New session 16 of user core.
Jan 13 20:47:23.674869 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:47:25.354102 sshd[4743]: Connection closed by 139.178.89.65 port 49874
Jan 13 20:47:25.358119 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:25.363997 systemd[1]: sshd@15-172.31.29.115:22-139.178.89.65:49874.service: Deactivated successfully.
Jan 13 20:47:25.376010 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:47:25.377492 systemd-logind[1973]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:47:25.388850 systemd[1]: Started sshd@16-172.31.29.115:22-139.178.89.65:49880.service - OpenSSH per-connection server daemon (139.178.89.65:49880).
Jan 13 20:47:25.390012 systemd-logind[1973]: Removed session 16.
Jan 13 20:47:25.551561 sshd[4780]: Accepted publickey for core from 139.178.89.65 port 49880 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:25.552907 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:25.559343 systemd-logind[1973]: New session 17 of user core.
Jan 13 20:47:25.565954 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:47:25.868771 sshd[4783]: Connection closed by 139.178.89.65 port 49880
Jan 13 20:47:25.869552 sshd-session[4780]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:25.874642 systemd[1]: sshd@16-172.31.29.115:22-139.178.89.65:49880.service: Deactivated successfully.
Jan 13 20:47:25.877963 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:47:25.879769 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:47:25.880837 systemd-logind[1973]: Removed session 17.
Jan 13 20:47:25.898282 systemd[1]: Started sshd@17-172.31.29.115:22-139.178.89.65:49884.service - OpenSSH per-connection server daemon (139.178.89.65:49884).
Jan 13 20:47:26.067274 sshd[4792]: Accepted publickey for core from 139.178.89.65 port 49884 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:26.069080 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:26.074686 systemd-logind[1973]: New session 18 of user core.
Jan 13 20:47:26.085429 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:47:26.270693 sshd[4795]: Connection closed by 139.178.89.65 port 49884
Jan 13 20:47:26.271461 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:26.275893 systemd[1]: sshd@17-172.31.29.115:22-139.178.89.65:49884.service: Deactivated successfully.
Jan 13 20:47:26.282060 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:47:26.283987 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:47:26.285321 systemd-logind[1973]: Removed session 18.
Jan 13 20:47:31.304332 systemd[1]: Started sshd@18-172.31.29.115:22-139.178.89.65:53142.service - OpenSSH per-connection server daemon (139.178.89.65:53142).
Jan 13 20:47:31.508257 sshd[4827]: Accepted publickey for core from 139.178.89.65 port 53142 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:31.509032 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:31.514771 systemd-logind[1973]: New session 19 of user core.
Jan 13 20:47:31.524218 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:47:31.810380 sshd[4830]: Connection closed by 139.178.89.65 port 53142
Jan 13 20:47:31.811348 sshd-session[4827]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:31.823579 systemd[1]: sshd@18-172.31.29.115:22-139.178.89.65:53142.service: Deactivated successfully.
Jan 13 20:47:31.835549 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:47:31.839403 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:47:31.841426 systemd-logind[1973]: Removed session 19.
Jan 13 20:47:36.842086 systemd[1]: Started sshd@19-172.31.29.115:22-139.178.89.65:53158.service - OpenSSH per-connection server daemon (139.178.89.65:53158).
Jan 13 20:47:37.015281 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 53158 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:37.017141 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:37.023197 systemd-logind[1973]: New session 20 of user core.
Jan 13 20:47:37.032132 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:47:37.244020 sshd[4868]: Connection closed by 139.178.89.65 port 53158
Jan 13 20:47:37.244911 sshd-session[4865]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:37.253856 systemd[1]: sshd@19-172.31.29.115:22-139.178.89.65:53158.service: Deactivated successfully.
Jan 13 20:47:37.259908 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:47:37.261385 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:47:37.262409 systemd-logind[1973]: Removed session 20.
Jan 13 20:47:42.277007 systemd[1]: Started sshd@20-172.31.29.115:22-139.178.89.65:45316.service - OpenSSH per-connection server daemon (139.178.89.65:45316).
Jan 13 20:47:42.444344 sshd[4901]: Accepted publickey for core from 139.178.89.65 port 45316 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:42.446654 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:42.454004 systemd-logind[1973]: New session 21 of user core.
Jan 13 20:47:42.460036 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:47:42.667160 sshd[4904]: Connection closed by 139.178.89.65 port 45316
Jan 13 20:47:42.668012 sshd-session[4901]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:42.671972 systemd[1]: sshd@20-172.31.29.115:22-139.178.89.65:45316.service: Deactivated successfully.
Jan 13 20:47:42.678522 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:47:42.680566 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:47:42.683135 systemd-logind[1973]: Removed session 21.
Jan 13 20:47:47.695862 systemd[1]: Started sshd@21-172.31.29.115:22-139.178.89.65:45328.service - OpenSSH per-connection server daemon (139.178.89.65:45328).
Jan 13 20:47:47.883992 sshd[4936]: Accepted publickey for core from 139.178.89.65 port 45328 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:47:47.885501 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:47.892190 systemd-logind[1973]: New session 22 of user core.
Jan 13 20:47:47.903964 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:47:48.138450 sshd[4939]: Connection closed by 139.178.89.65 port 45328
Jan 13 20:47:48.140186 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:48.150194 systemd[1]: sshd@21-172.31.29.115:22-139.178.89.65:45328.service: Deactivated successfully.
Jan 13 20:47:48.150593 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:47:48.155027 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:47:48.161854 systemd-logind[1973]: Removed session 22.
Jan 13 20:48:02.971749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba-rootfs.mount: Deactivated successfully.
Jan 13 20:48:02.975490 containerd[2001]: time="2025-01-13T20:48:02.975423466Z" level=info msg="shim disconnected" id=15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba namespace=k8s.io
Jan 13 20:48:02.975490 containerd[2001]: time="2025-01-13T20:48:02.975485747Z" level=warning msg="cleaning up after shim disconnected" id=15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba namespace=k8s.io
Jan 13 20:48:02.976144 containerd[2001]: time="2025-01-13T20:48:02.975616137Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:48:03.941811 kubelet[3328]: I0113 20:48:03.941777 3328 scope.go:117] "RemoveContainer" containerID="15577735283cd2d0abb3451e348f00481c431af43056408ec9dcba975a7365ba"
Jan 13 20:48:03.944740 containerd[2001]: time="2025-01-13T20:48:03.944702449Z" level=info msg="CreateContainer within sandbox \"b3f22f66a9c5bf8f6a17d7976d42c846fc8da05afcfaa78f9a91f6c4652f52de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:48:03.967613 containerd[2001]: time="2025-01-13T20:48:03.967568106Z" level=info msg="CreateContainer within sandbox \"b3f22f66a9c5bf8f6a17d7976d42c846fc8da05afcfaa78f9a91f6c4652f52de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"71954daf131fad29263bbaab18ff11bb4e0eb3578ff47bb64b109f3b780c94d2\""
Jan 13 20:48:03.968228 containerd[2001]: time="2025-01-13T20:48:03.968200264Z" level=info msg="StartContainer for \"71954daf131fad29263bbaab18ff11bb4e0eb3578ff47bb64b109f3b780c94d2\""
Jan 13 20:48:04.060785 containerd[2001]: time="2025-01-13T20:48:04.060362649Z" level=info msg="StartContainer for \"71954daf131fad29263bbaab18ff11bb4e0eb3578ff47bb64b109f3b780c94d2\" returns successfully"
Jan 13 20:48:08.627198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794-rootfs.mount: Deactivated successfully.
Jan 13 20:48:08.645355 containerd[2001]: time="2025-01-13T20:48:08.645270276Z" level=info msg="shim disconnected" id=804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794 namespace=k8s.io
Jan 13 20:48:08.646118 containerd[2001]: time="2025-01-13T20:48:08.645385149Z" level=warning msg="cleaning up after shim disconnected" id=804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794 namespace=k8s.io
Jan 13 20:48:08.646118 containerd[2001]: time="2025-01-13T20:48:08.645400598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:48:08.985404 kubelet[3328]: I0113 20:48:08.985367 3328 scope.go:117] "RemoveContainer" containerID="804f94e9a5284fc07767229f3db9a7e8dffa9f4647ace5c33deaff19203d0794"
Jan 13 20:48:08.988431 containerd[2001]: time="2025-01-13T20:48:08.988393330Z" level=info msg="CreateContainer within sandbox \"93e31b6a0cc425c3bd68f454726d4f2513a6535155e58616dbab1c9e305825b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:48:09.001710 kubelet[3328]: E0113 20:48:09.001669 3328 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": context deadline exceeded"
Jan 13 20:48:09.017726 containerd[2001]: time="2025-01-13T20:48:09.017684786Z" level=info msg="CreateContainer within sandbox \"93e31b6a0cc425c3bd68f454726d4f2513a6535155e58616dbab1c9e305825b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a4b637722818aa060bd25ae1e281fa91e76cfed41292c2bd8e5d3ddf12e4ec2a\""
Jan 13 20:48:09.018285 containerd[2001]: time="2025-01-13T20:48:09.018231022Z" level=info msg="StartContainer for \"a4b637722818aa060bd25ae1e281fa91e76cfed41292c2bd8e5d3ddf12e4ec2a\""
Jan 13 20:48:09.104670 containerd[2001]: time="2025-01-13T20:48:09.104485266Z" level=info msg="StartContainer for \"a4b637722818aa060bd25ae1e281fa91e76cfed41292c2bd8e5d3ddf12e4ec2a\" returns successfully"
Jan 13 20:48:09.632450 systemd[1]: run-containerd-runc-k8s.io-a4b637722818aa060bd25ae1e281fa91e76cfed41292c2bd8e5d3ddf12e4ec2a-runc.fAUgat.mount: Deactivated successfully.
Jan 13 20:48:19.002497 kubelet[3328]: E0113 20:48:19.002391 3328 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-115?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"