Feb 13 15:34:44.998978 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:34:44.999027 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:34:44.999045 kernel: BIOS-provided physical RAM map:
Feb 13 15:34:44.999217 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:34:44.999231 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:34:44.999242 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:34:44.999262 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:34:44.999275 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:34:44.999288 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:34:44.999301 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:34:44.999314 kernel: NX (Execute Disable) protection: active
Feb 13 15:34:44.999326 kernel: APIC: Static calls initialized
Feb 13 15:34:44.999339 kernel: SMBIOS 2.7 present.
Feb 13 15:34:44.999353 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:34:44.999373 kernel: Hypervisor detected: KVM
Feb 13 15:34:44.999387 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:34:44.999401 kernel: kvm-clock: using sched offset of 8778317184 cycles
Feb 13 15:34:44.999417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:34:44.999432 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 15:34:44.999446 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:34:44.999462 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:34:44.999480 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:34:44.999495 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:34:44.999510 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:34:44.999525 kernel: Using GB pages for direct mapping
Feb 13 15:34:44.999540 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:34:44.999554 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:34:44.999569 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:34:44.999582 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:34:44.999597 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:34:44.999616 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:34:44.999630 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:34:44.999663 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:34:44.999732 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:34:44.999745 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:34:44.999757 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:34:44.999769 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:34:44.999781 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:34:44.999792 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:34:44.999809 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:34:44.999826 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:34:45.001376 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:34:45.001523 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:34:45.001540 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:34:45.001559 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:34:45.001572 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:34:45.001584 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:34:45.001596 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:34:45.001609 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:34:45.001623 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:34:45.001654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:34:45.001669 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:34:45.001684 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:34:45.001703 kernel: Zone ranges:
Feb 13 15:34:45.001718 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:34:45.001734 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:34:45.001749 kernel: Normal empty
Feb 13 15:34:45.001764 kernel: Movable zone start for each node
Feb 13 15:34:45.001779 kernel: Early memory node ranges
Feb 13 15:34:45.001795 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:34:45.001810 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:34:45.001827 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:34:45.001843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:34:45.001862 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:34:45.001877 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:34:45.001892 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:34:45.001907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:34:45.001922 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:34:45.001937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:34:45.001952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:34:45.001967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:34:45.001982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:34:45.002001 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:34:45.002016 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:34:45.002029 kernel: TSC deadline timer available
Feb 13 15:34:45.002042 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:34:45.002056 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:34:45.002071 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:34:45.002084 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:34:45.002098 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:34:45.002114 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:34:45.002132 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:34:45.002146 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:34:45.002160 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:34:45.002174 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:34:45.002188 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:34:45.002204 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:34:45.002219 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:34:45.002233 kernel: random: crng init done
Feb 13 15:34:45.002251 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:34:45.002266 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:34:45.002281 kernel: Fallback order for Node 0: 0
Feb 13 15:34:45.002297 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:34:45.002314 kernel: Policy zone: DMA32
Feb 13 15:34:45.002390 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:34:45.002408 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:34:45.002423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:34:45.002439 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:34:45.002458 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:34:45.002473 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:34:45.002489 kernel: Dynamic Preempt: voluntary
Feb 13 15:34:45.002622 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:34:45.002663 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:34:45.002679 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:34:45.002695 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:34:45.002711 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:34:45.002727 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:34:45.002747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:34:45.002763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:34:45.002780 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:34:45.002796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:34:45.002812 kernel: Console: colour VGA+ 80x25
Feb 13 15:34:45.002827 kernel: printk: console [ttyS0] enabled
Feb 13 15:34:45.002843 kernel: ACPI: Core revision 20230628
Feb 13 15:34:45.002858 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:34:45.002871 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:34:45.002889 kernel: x2apic enabled
Feb 13 15:34:45.002905 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:34:45.002932 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 15:34:45.002951 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 15:34:45.002966 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:34:45.002981 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:34:45.002995 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:34:45.003009 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:34:45.003024 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:34:45.003038 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:34:45.003054 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:34:45.003069 kernel: RETBleed: Vulnerable
Feb 13 15:34:45.003085 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:34:45.003104 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:34:45.003119 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:34:45.003135 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:34:45.003152 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:34:45.003168 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:34:45.003185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:34:45.003205 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:34:45.003222 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:34:45.003238 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:34:45.003253 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:34:45.003267 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:34:45.003282 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:34:45.003297 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:34:45.003310 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:34:45.003325 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:34:45.003339 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:34:45.003353 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:34:45.003371 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:34:45.003386 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:34:45.003400 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:34:45.003415 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:34:45.003429 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:34:45.003443 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:34:45.003458 kernel: landlock: Up and running.
Feb 13 15:34:45.003472 kernel: SELinux: Initializing.
Feb 13 15:34:45.003487 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:34:45.003502 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:34:45.003517 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:34:45.003535 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:34:45.003550 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:34:45.003566 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:34:45.003581 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:34:45.003596 kernel: signal: max sigframe size: 3632
Feb 13 15:34:45.003611 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:34:45.003626 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:34:45.003658 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:34:45.003733 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:34:45.003757 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:34:45.003773 kernel: .... node #0, CPUs: #1
Feb 13 15:34:45.003790 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:34:45.003808 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:34:45.003825 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:34:45.003842 kernel: smpboot: Max logical packages: 1
Feb 13 15:34:45.003858 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 15:34:45.003874 kernel: devtmpfs: initialized
Feb 13 15:34:45.003891 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:34:45.003910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:34:45.003927 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:34:45.003943 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:34:45.003960 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:34:45.003976 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:34:45.003992 kernel: audit: type=2000 audit(1739460883.378:1): state=initialized audit_enabled=0 res=1
Feb 13 15:34:45.004008 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:34:45.004025 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:34:45.004041 kernel: cpuidle: using governor menu
Feb 13 15:34:45.004060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:34:45.004076 kernel: dca service started, version 1.12.1
Feb 13 15:34:45.004192 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:34:45.004210 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:34:45.004227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:34:45.004244 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:34:45.004260 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:34:45.004277 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:34:45.004297 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:34:45.004314 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:34:45.004331 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:34:45.004348 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:34:45.004364 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:34:45.004381 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:34:45.004397 kernel: ACPI: Interpreter enabled
Feb 13 15:34:45.004414 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:34:45.004430 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:34:45.004446 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:34:45.004466 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:34:45.004482 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:34:45.004498 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:34:45.004742 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:34:45.004887 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:34:45.005071 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:34:45.005091 kernel: acpiphp: Slot [3] registered
Feb 13 15:34:45.005110 kernel: acpiphp: Slot [4] registered
Feb 13 15:34:45.005124 kernel: acpiphp: Slot [5] registered
Feb 13 15:34:45.005137 kernel: acpiphp: Slot [6] registered
Feb 13 15:34:45.005150 kernel: acpiphp: Slot [7] registered
Feb 13 15:34:45.005164 kernel: acpiphp: Slot [8] registered
Feb 13 15:34:45.005177 kernel: acpiphp: Slot [9] registered
Feb 13 15:34:45.005190 kernel: acpiphp: Slot [10] registered
Feb 13 15:34:45.005204 kernel: acpiphp: Slot [11] registered
Feb 13 15:34:45.005218 kernel: acpiphp: Slot [12] registered
Feb 13 15:34:45.005235 kernel: acpiphp: Slot [13] registered
Feb 13 15:34:45.005249 kernel: acpiphp: Slot [14] registered
Feb 13 15:34:45.005262 kernel: acpiphp: Slot [15] registered
Feb 13 15:34:45.005277 kernel: acpiphp: Slot [16] registered
Feb 13 15:34:45.005290 kernel: acpiphp: Slot [17] registered
Feb 13 15:34:45.005303 kernel: acpiphp: Slot [18] registered
Feb 13 15:34:45.005316 kernel: acpiphp: Slot [19] registered
Feb 13 15:34:45.005330 kernel: acpiphp: Slot [20] registered
Feb 13 15:34:45.005343 kernel: acpiphp: Slot [21] registered
Feb 13 15:34:45.005356 kernel: acpiphp: Slot [22] registered
Feb 13 15:34:45.005372 kernel: acpiphp: Slot [23] registered
Feb 13 15:34:45.005385 kernel: acpiphp: Slot [24] registered
Feb 13 15:34:45.005398 kernel: acpiphp: Slot [25] registered
Feb 13 15:34:45.005411 kernel: acpiphp: Slot [26] registered
Feb 13 15:34:45.005425 kernel: acpiphp: Slot [27] registered
Feb 13 15:34:45.005439 kernel: acpiphp: Slot [28] registered
Feb 13 15:34:45.005452 kernel: acpiphp: Slot [29] registered
Feb 13 15:34:45.005465 kernel: acpiphp: Slot [30] registered
Feb 13 15:34:45.005479 kernel: acpiphp: Slot [31] registered
Feb 13 15:34:45.005494 kernel: PCI host bridge to bus 0000:00
Feb 13 15:34:45.005625 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:34:45.005765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:34:45.005876 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:34:45.005989 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:34:45.011383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:34:45.012012 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:34:45.012399 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:34:45.012561 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:34:45.012710 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:34:45.012841 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:34:45.012966 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:34:45.013091 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:34:45.013215 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:34:45.013478 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:34:45.013660 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:34:45.013798 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:34:45.013933 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 36132 usecs
Feb 13 15:34:45.014079 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:34:45.014214 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:34:45.014381 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:34:45.014523 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:34:45.014687 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:34:45.014931 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:34:45.015079 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:34:45.015208 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:34:45.015227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:34:45.015248 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:34:45.015261 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:34:45.015275 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:34:45.015288 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:34:45.015302 kernel: iommu: Default domain type: Translated
Feb 13 15:34:45.015316 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:34:45.015329 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:34:45.015342 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:34:45.015356 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:34:45.015373 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:34:45.015502 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:34:45.015784 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:34:45.017307 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:34:45.017336 kernel: vgaarb: loaded
Feb 13 15:34:45.017353 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:34:45.017368 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:34:45.017384 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:34:45.017400 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:34:45.017423 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:34:45.017439 kernel: pnp: PnP ACPI init
Feb 13 15:34:45.017453 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:34:45.017469 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:34:45.017484 kernel: NET: Registered PF_INET protocol family
Feb 13 15:34:45.017499 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:34:45.017515 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:34:45.017530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:34:45.017546 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:34:45.017565 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:34:45.017581 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:34:45.017597 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:34:45.017609 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:34:45.017624 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:34:45.017661 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:34:45.018126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:34:45.018380 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:34:45.018665 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:34:45.018895 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:34:45.019168 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:34:45.019195 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:34:45.019212 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:34:45.019228 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 15:34:45.019245 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:34:45.019261 kernel: Initialise system trusted keyrings
Feb 13 15:34:45.019283 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:34:45.019426 kernel: Key type asymmetric registered
Feb 13 15:34:45.019443 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:34:45.019460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:34:45.019476 kernel: io scheduler mq-deadline registered
Feb 13 15:34:45.019492 kernel: io scheduler kyber registered
Feb 13 15:34:45.019508 kernel: io scheduler bfq registered
Feb 13 15:34:45.019523 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:34:45.019576 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:34:45.019602 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:34:45.019620 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:34:45.019855 kernel: i8042: Warning: Keylock active
Feb 13 15:34:45.019872 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:34:45.019886 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:34:45.020268 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:34:45.020408 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:34:45.020534 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:34:44 UTC (1739460884)
Feb 13 15:34:45.020706 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:34:45.020729 kernel: intel_pstate: CPU model not supported
Feb 13 15:34:45.020746 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:34:45.020763 kernel: Segment Routing with IPv6
Feb 13 15:34:45.020780 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:34:45.020796 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:34:45.020813 kernel: Key type dns_resolver registered
Feb 13 15:34:45.020829 kernel: IPI shorthand broadcast: enabled
Feb 13 15:34:45.020846 kernel: sched_clock: Marking stable (750001934, 305427746)->(1185242280, -129812600)
Feb 13 15:34:45.020868 kernel: registered taskstats version 1
Feb 13 15:34:45.020886 kernel: Loading compiled-in X.509 certificates
Feb 13 15:34:45.020903 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:34:45.020919 kernel: Key type .fscrypt registered
Feb 13 15:34:45.020935 kernel: Key type fscrypt-provisioning registered
Feb 13 15:34:45.020952 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:34:45.020968 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:34:45.020984 kernel: ima: No architecture policies found
Feb 13 15:34:45.021000 kernel: clk: Disabling unused clocks
Feb 13 15:34:45.021020 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:34:45.021036 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:34:45.021053 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:34:45.021070 kernel: Run /init as init process
Feb 13 15:34:45.021086 kernel: with arguments:
Feb 13 15:34:45.021102 kernel: /init
Feb 13 15:34:45.021119 kernel: with environment:
Feb 13 15:34:45.021135 kernel: HOME=/
Feb 13 15:34:45.021150 kernel: TERM=linux
Feb 13 15:34:45.021171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:34:45.021217 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:34:45.021238 systemd[1]: Detected virtualization amazon.
Feb 13 15:34:45.021257 systemd[1]: Detected architecture x86-64.
Feb 13 15:34:45.021275 systemd[1]: Running in initrd.
Feb 13 15:34:45.021291 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:34:45.021309 systemd[1]: Hostname set to .
Feb 13 15:34:45.021331 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:34:45.021353 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:34:45.021452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:34:45.021474 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:34:45.021494 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:34:45.021513 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:34:45.021531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:34:45.021550 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:34:45.021575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:34:45.021593 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:34:45.021609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:34:45.021625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:34:45.021678 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:34:45.021692 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:34:45.021706 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:34:45.021725 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:34:45.021740 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:34:45.021754 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:34:45.021769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:34:45.021784 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:34:45.021799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:34:45.021813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:34:45.021828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:34:45.021848 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:34:45.021863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:34:45.021876 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:34:45.021891 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:34:45.021906 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:34:45.021924 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:34:45.021942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:34:45.021957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:34:45.021973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:34:45.022021 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:34:45.022061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:34:45.022076 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:34:45.022092 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:34:45.022109 systemd-journald[179]: Journal started
Feb 13 15:34:45.022144 systemd-journald[179]: Runtime Journal (/run/log/journal/ec245cf1e4584726fe2b7862653c9333) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:34:45.026679 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:34:45.041198 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:34:45.270550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:34:45.270593 kernel: Bridge firewalling registered
Feb 13 15:34:45.042837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:34:45.088213 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:34:45.282897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:34:45.287272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:34:45.291627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:45.302195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:34:45.307533 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:34:45.316903 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:34:45.327969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:34:45.335408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:34:45.381540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:34:45.388463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:34:45.389329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:34:45.404045 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:34:45.412927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:34:45.430504 dracut-cmdline[214]: dracut-dracut-053
Feb 13 15:34:45.435685 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:34:45.492086 systemd-resolved[215]: Positive Trust Anchors:
Feb 13 15:34:45.492107 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:34:45.492171 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:34:45.497923 systemd-resolved[215]: Defaulting to hostname 'linux'.
Feb 13 15:34:45.500963 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:34:45.503416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:34:45.575755 kernel: SCSI subsystem initialized
Feb 13 15:34:45.590663 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:34:45.607661 kernel: iscsi: registered transport (tcp)
Feb 13 15:34:45.636112 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:34:45.636320 kernel: QLogic iSCSI HBA Driver
Feb 13 15:34:45.701518 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:34:45.710301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:34:45.757354 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:34:45.757436 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:34:45.757690 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:34:45.812901 kernel: raid6: avx512x4 gen() 14915 MB/s
Feb 13 15:34:45.829688 kernel: raid6: avx512x2 gen() 5791 MB/s
Feb 13 15:34:45.848767 kernel: raid6: avx512x1 gen() 6541 MB/s
Feb 13 15:34:45.865686 kernel: raid6: avx2x4 gen() 2600 MB/s
Feb 13 15:34:45.882669 kernel: raid6: avx2x2 gen() 13381 MB/s
Feb 13 15:34:45.899850 kernel: raid6: avx2x1 gen() 11415 MB/s
Feb 13 15:34:45.899928 kernel: raid6: using algorithm avx512x4 gen() 14915 MB/s
Feb 13 15:34:45.917992 kernel: raid6: .... xor() 6776 MB/s, rmw enabled
Feb 13 15:34:45.918088 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:34:45.945665 kernel: xor: automatically using best checksumming function avx
Feb 13 15:34:46.194663 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:34:46.207293 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:34:46.216865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:34:46.235733 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 15:34:46.241910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:34:46.261114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:34:46.293087 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Feb 13 15:34:46.329755 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:34:46.343023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:34:46.423962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:34:46.436180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:34:46.478539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:34:46.486167 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:34:46.488310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:34:46.494202 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:34:46.509002 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:34:46.557440 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:34:46.602835 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:34:46.603049 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:34:46.603219 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:34:46.603243 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:df:98:ef:63:53
Feb 13 15:34:46.571997 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:34:46.607539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:34:46.607749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:34:46.609625 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:34:46.621869 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:34:46.621908 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:34:46.623157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:34:46.623389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:46.628585 (udev-worker)[458]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:34:46.646156 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:34:46.646412 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:34:46.628717 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:34:46.641151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:34:46.669315 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:34:46.673670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:34:46.673730 kernel: GPT:9289727 != 16777215
Feb 13 15:34:46.673749 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:34:46.673766 kernel: GPT:9289727 != 16777215
Feb 13 15:34:46.673782 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:34:46.673798 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:34:46.816928 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Feb 13 15:34:46.830713 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Feb 13 15:34:46.881159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:46.908934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:34:46.975391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:34:46.979300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:34:46.993073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:34:47.002233 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:34:47.002392 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:34:47.028989 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:34:47.041917 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:34:47.053043 disk-uuid[631]: Primary Header is updated.
Feb 13 15:34:47.053043 disk-uuid[631]: Secondary Entries is updated.
Feb 13 15:34:47.053043 disk-uuid[631]: Secondary Header is updated.
Feb 13 15:34:47.058662 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:34:47.078770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:34:48.082763 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:34:48.082832 disk-uuid[632]: The operation has completed successfully.
Feb 13 15:34:48.285498 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:34:48.285650 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:34:48.305010 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:34:48.324581 sh[890]: Success
Feb 13 15:34:48.346037 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:34:48.445572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:34:48.455760 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:34:48.461505 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:34:48.491711 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:34:48.491787 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:34:48.493719 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:34:48.493750 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:34:48.493764 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:34:48.611675 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:34:48.633447 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:34:48.638018 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:34:48.644942 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:34:48.649848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:34:48.693028 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:34:48.693099 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:34:48.693119 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:34:48.702719 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:34:48.724532 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:34:48.724064 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:34:48.734106 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:34:48.741101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:34:48.837437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:34:48.851013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:34:48.884218 systemd-networkd[1082]: lo: Link UP
Feb 13 15:34:48.884231 systemd-networkd[1082]: lo: Gained carrier
Feb 13 15:34:48.887192 systemd-networkd[1082]: Enumeration completed
Feb 13 15:34:48.887553 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:48.887557 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:34:48.888623 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:34:48.892409 systemd-networkd[1082]: eth0: Link UP
Feb 13 15:34:48.892414 systemd-networkd[1082]: eth0: Gained carrier
Feb 13 15:34:48.892430 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:34:48.894350 systemd[1]: Reached target network.target - Network.
Feb 13 15:34:48.906734 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.27.74/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:34:49.075691 ignition[1013]: Ignition 2.20.0
Feb 13 15:34:49.075703 ignition[1013]: Stage: fetch-offline
Feb 13 15:34:49.075883 ignition[1013]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:49.075891 ignition[1013]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:49.080215 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:34:49.077296 ignition[1013]: Ignition finished successfully
Feb 13 15:34:49.088055 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:34:49.112290 ignition[1091]: Ignition 2.20.0
Feb 13 15:34:49.112302 ignition[1091]: Stage: fetch
Feb 13 15:34:49.113148 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:49.113166 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:49.113319 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:49.127737 ignition[1091]: PUT result: OK
Feb 13 15:34:49.130963 ignition[1091]: parsed url from cmdline: ""
Feb 13 15:34:49.130977 ignition[1091]: no config URL provided
Feb 13 15:34:49.130988 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:34:49.131042 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:34:49.131069 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:49.133905 ignition[1091]: PUT result: OK
Feb 13 15:34:49.134077 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:34:49.145013 ignition[1091]: GET result: OK
Feb 13 15:34:49.146413 ignition[1091]: parsing config with SHA512: 49fd0af943f5b6fb37e0e24b560dd84fa8ad9bc3ca0b0564c3e80a8a0d34381f4a2698c1c6e292f7c74fc61b337fc7560e83a86d243ead0c597b74f207cfe23f
Feb 13 15:34:49.161291 unknown[1091]: fetched base config from "system"
Feb 13 15:34:49.161305 unknown[1091]: fetched base config from "system"
Feb 13 15:34:49.161809 ignition[1091]: fetch: fetch complete
Feb 13 15:34:49.161312 unknown[1091]: fetched user config from "aws"
Feb 13 15:34:49.161816 ignition[1091]: fetch: fetch passed
Feb 13 15:34:49.166029 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:34:49.161869 ignition[1091]: Ignition finished successfully
Feb 13 15:34:49.177447 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:34:49.217427 ignition[1097]: Ignition 2.20.0
Feb 13 15:34:49.217442 ignition[1097]: Stage: kargs
Feb 13 15:34:49.218032 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:49.218046 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:49.218157 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:49.222428 ignition[1097]: PUT result: OK
Feb 13 15:34:49.234951 ignition[1097]: kargs: kargs passed
Feb 13 15:34:49.235040 ignition[1097]: Ignition finished successfully
Feb 13 15:34:49.237958 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:34:49.244911 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:34:49.263025 ignition[1103]: Ignition 2.20.0
Feb 13 15:34:49.263040 ignition[1103]: Stage: disks
Feb 13 15:34:49.263453 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:49.264070 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:49.264760 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:49.267482 ignition[1103]: PUT result: OK
Feb 13 15:34:49.273433 ignition[1103]: disks: disks passed
Feb 13 15:34:49.273583 ignition[1103]: Ignition finished successfully
Feb 13 15:34:49.275758 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:34:49.276614 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:34:49.279667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:34:49.280296 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:34:49.280668 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:34:49.280821 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:34:49.288869 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:34:49.337628 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:34:49.341941 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:34:49.350806 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:34:49.509389 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:34:49.509307 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:34:49.510908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:34:49.527192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:34:49.531939 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:34:49.535674 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:34:49.535748 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:34:49.535850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:34:49.565857 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1130)
Feb 13 15:34:49.569958 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:34:49.570024 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:34:49.570048 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:34:49.571372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:34:49.576700 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:34:49.580921 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:34:49.590326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:34:49.977820 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:34:49.998659 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:34:50.006107 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:34:50.030729 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:34:50.175575 systemd-networkd[1082]: eth0: Gained IPv6LL
Feb 13 15:34:50.395914 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:34:50.403819 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:34:50.414125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:34:50.425235 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:34:50.426530 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:34:50.463120 ignition[1243]: INFO : Ignition 2.20.0
Feb 13 15:34:50.463120 ignition[1243]: INFO : Stage: mount
Feb 13 15:34:50.465412 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:50.466979 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:50.468724 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:50.471112 ignition[1243]: INFO : PUT result: OK
Feb 13 15:34:50.473784 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:34:50.479093 ignition[1243]: INFO : mount: mount passed
Feb 13 15:34:50.480150 ignition[1243]: INFO : Ignition finished successfully
Feb 13 15:34:50.482616 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:34:50.498818 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:34:50.528038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:34:50.557669 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1255)
Feb 13 15:34:50.557723 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:34:50.557736 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:34:50.557749 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:34:50.564275 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:34:50.564186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:34:50.595942 ignition[1272]: INFO : Ignition 2.20.0
Feb 13 15:34:50.595942 ignition[1272]: INFO : Stage: files
Feb 13 15:34:50.598076 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:50.598076 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:50.598076 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:50.598076 ignition[1272]: INFO : PUT result: OK
Feb 13 15:34:50.605187 ignition[1272]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:34:50.607451 ignition[1272]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:34:50.607451 ignition[1272]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:34:50.635694 ignition[1272]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:34:50.637495 ignition[1272]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:34:50.642203 unknown[1272]: wrote ssh authorized keys file for user: core
Feb 13 15:34:50.644196 ignition[1272]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:34:50.648068 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:34:50.650596 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:34:50.729097 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:34:50.974839 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:34:50.974839 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:34:50.982017 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:34:50.982017 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:34:50.982017 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:34:50.982017 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:34:50.994971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 15:34:51.503149 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:34:51.935240 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:34:51.935240 ignition[1272]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:34:51.942415 ignition[1272]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:34:51.946726 ignition[1272]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:34:51.946726 ignition[1272]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:34:51.946726 ignition[1272]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:34:51.952960 ignition[1272]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:34:51.952960 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:34:51.957873 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:34:51.957873 ignition[1272]: INFO : files: files passed
Feb 13 15:34:51.957873 ignition[1272]: INFO : Ignition finished successfully
Feb 13 15:34:51.961733 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:34:51.972768 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:34:51.980752 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:34:51.999321 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:34:52.001378 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:34:52.017169 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:52.019727 initrd-setup-root-after-ignition[1301]: grep:
Feb 13 15:34:52.019727 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:52.029631 initrd-setup-root-after-ignition[1301]: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:34:52.022904 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:34:52.024944 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:34:52.038971 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:34:52.077567 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:34:52.077792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:34:52.081918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:34:52.094029 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:34:52.102769 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:34:52.116307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:34:52.151086 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:34:52.160248 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:34:52.195054 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:34:52.195312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:34:52.201066 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:34:52.203295 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:34:52.204866 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:34:52.209372 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:34:52.211856 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:34:52.214619 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:34:52.217333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:34:52.227741 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:34:52.231107 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:34:52.233995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:34:52.235877 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:34:52.238804 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:34:52.242048 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:34:52.243603 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:34:52.243777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:34:52.246830 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:34:52.256830 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:34:52.260357 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:34:52.260576 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:34:52.267848 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:34:52.271246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:34:52.275494 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:34:52.275826 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:34:52.288251 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:34:52.288430 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:34:52.304974 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:34:52.307256 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:34:52.307476 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:34:52.319110 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:34:52.321883 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:34:52.322238 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:34:52.325182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:34:52.325437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:34:52.351853 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:34:52.352032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:34:52.376456 ignition[1325]: INFO : Ignition 2.20.0
Feb 13 15:34:52.376456 ignition[1325]: INFO : Stage: umount
Feb 13 15:34:52.376456 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:34:52.376456 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:34:52.376456 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:34:52.376456 ignition[1325]: INFO : PUT result: OK
Feb 13 15:34:52.393262 ignition[1325]: INFO : umount: umount passed
Feb 13 15:34:52.393262 ignition[1325]: INFO : Ignition finished successfully
Feb 13 15:34:52.383433 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:34:52.383565 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:34:52.388504 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:34:52.389851 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:34:52.390065 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:34:52.397056 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:34:52.397123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:34:52.403740 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:34:52.403804 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:34:52.407211 systemd[1]: Stopped target network.target - Network.
Feb 13 15:34:52.408240 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:34:52.408310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:34:52.411193 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:34:52.412713 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:34:52.418870 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:34:52.422729 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:34:52.424204 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:34:52.430016 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:34:52.430067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:34:52.431436 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:34:52.431479 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:34:52.432748 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:34:52.432812 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:34:52.436312 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:34:52.436365 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:34:52.440245 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:34:52.445992 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:34:52.449705 systemd-networkd[1082]: eth0: DHCPv6 lease lost
Feb 13 15:34:52.459314 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:34:52.459457 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:34:52.461075 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:34:52.461183 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:34:52.467272 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:34:52.467407 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:34:52.476859 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:34:52.477773 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:34:52.477860 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:34:52.481515 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:34:52.481593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:34:52.484245 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:34:52.484321 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:34:52.485268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:34:52.485316 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:34:52.485503 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:34:52.535375 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:34:52.535596 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:34:52.541385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:34:52.541469 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:34:52.548795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:34:52.548876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:34:52.550221 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:34:52.550315 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:34:52.554882 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:34:52.554962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:34:52.555261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:34:52.555368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:34:52.570402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:34:52.571962 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:34:52.572101 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:34:52.573978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:34:52.574041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:34:52.578223 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:34:52.583418 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:34:52.593111 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:34:52.593260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:34:52.610174 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:34:52.610309 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:34:52.612751 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:34:52.614015 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:34:52.614957 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:34:52.623843 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:34:52.647099 systemd[1]: Switching root.
Feb 13 15:34:52.695654 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:34:52.695737 systemd-journald[179]: Journal stopped
Feb 13 15:34:55.199022 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:34:55.199928 kernel: SELinux: policy capability open_perms=1
Feb 13 15:34:55.199953 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:34:55.199971 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:34:55.199989 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:34:55.200012 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:34:55.200034 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:34:55.200056 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:34:55.200080 kernel: audit: type=1403 audit(1739460893.089:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:34:55.200101 systemd[1]: Successfully loaded SELinux policy in 56.327ms.
Feb 13 15:34:55.200123 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.289ms.
Feb 13 15:34:55.200142 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:34:55.200162 systemd[1]: Detected virtualization amazon.
Feb 13 15:34:55.200182 systemd[1]: Detected architecture x86-64.
Feb 13 15:34:55.200203 systemd[1]: Detected first boot.
Feb 13 15:34:55.200223 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:34:55.200242 zram_generator::config[1368]: No configuration found.
Feb 13 15:34:55.200309 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:34:55.200376 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:34:55.200400 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:34:55.200428 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:34:55.200451 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:34:55.200475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:34:55.200494 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:34:55.200513 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:34:55.200534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:34:55.200555 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:34:55.200576 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:34:55.200605 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:34:55.200625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:34:55.200681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:34:55.200705 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:34:55.200724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:34:55.200744 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:34:55.200809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:34:55.200829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:34:55.200851 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:34:55.200872 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:34:55.200900 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:34:55.200925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:34:55.200955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:34:55.200979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:34:55.201003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:34:55.201027 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:34:55.201051 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:34:55.201077 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:34:55.201103 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:34:55.201128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:34:55.201153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:34:55.201175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:34:55.201197 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:34:55.201216 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:34:55.201237 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:34:55.201257 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:34:55.201279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:34:55.201299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:34:55.201321 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:34:55.201338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:34:55.201357 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:34:55.201374 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:34:55.201392 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:34:55.201413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:55.201430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:34:55.201448 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:34:55.201466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:55.201488 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:34:55.201506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:55.201524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:34:55.201543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:55.201562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:34:55.201580 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:34:55.201601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:34:55.201619 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:34:55.201655 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:34:55.201674 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:34:55.201691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:34:55.201709 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:34:55.201727 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:34:55.201745 kernel: loop: module loaded
Feb 13 15:34:55.201764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:34:55.201783 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:34:55.201802 systemd[1]: Stopped verity-setup.service.
Feb 13 15:34:55.201825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:34:55.201844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:34:55.201861 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:34:55.201879 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:34:55.201897 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:34:55.201915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:34:55.201933 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:34:55.201955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:34:55.201973 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:34:55.201992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:34:55.202011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:55.202029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:55.202047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:34:55.202065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:34:55.202085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:34:55.202104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:34:55.202122 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:34:55.202141 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:34:55.202162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:34:55.202183 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:34:55.202202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:34:55.202220 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:34:55.202239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:34:55.202256 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:34:55.202371 kernel: fuse: init (API version 7.39)
Feb 13 15:34:55.202394 kernel: ACPI: bus type drm_connector registered
Feb 13 15:34:55.202412 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:34:55.202561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:55.202606 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:34:55.202629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:34:55.202843 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:34:55.202870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:34:55.202890 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:34:55.202910 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:34:55.202931 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:34:55.202953 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:34:55.202982 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:34:55.203005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:34:55.205496 systemd-journald[1447]: Collecting audit messages is disabled.
Feb 13 15:34:55.205579 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:34:55.205602 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:34:55.205629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:34:55.205694 systemd-journald[1447]: Journal started
Feb 13 15:34:55.205741 systemd-journald[1447]: Runtime Journal (/run/log/journal/ec245cf1e4584726fe2b7862653c9333) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:34:54.496700 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:34:54.541415 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:34:54.541863 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:34:55.209693 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:34:55.247378 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 15:34:55.239710 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:34:55.260073 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:34:55.271932 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:34:55.289481 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:34:55.308845 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:34:55.318887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:34:55.338368 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:34:55.342480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:34:55.345547 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:34:55.374983 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:34:55.417866 systemd-journald[1447]: Time spent on flushing to /var/log/journal/ec245cf1e4584726fe2b7862653c9333 is 61.709ms for 966 entries.
Feb 13 15:34:55.417866 systemd-journald[1447]: System Journal (/var/log/journal/ec245cf1e4584726fe2b7862653c9333) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:34:55.496335 systemd-journald[1447]: Received client request to flush runtime journal.
Feb 13 15:34:55.496396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:34:55.496421 kernel: loop1: detected capacity change from 0 to 62848
Feb 13 15:34:55.448684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:34:55.450707 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:34:55.502625 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:34:55.511226 udevadm[1502]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:34:55.548447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:34:55.581764 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:34:55.578445 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:34:55.591914 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:34:55.638100 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Feb 13 15:34:55.638129 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Feb 13 15:34:55.647922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:34:55.719661 kernel: loop3: detected capacity change from 0 to 210664
Feb 13 15:34:55.783124 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 15:34:55.834664 kernel: loop5: detected capacity change from 0 to 62848
Feb 13 15:34:55.862758 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:34:55.896702 kernel: loop7: detected capacity change from 0 to 210664
Feb 13 15:34:55.936742 (sd-merge)[1521]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:34:55.937568 (sd-merge)[1521]: Merged extensions into '/usr'.
Feb 13 15:34:55.944205 systemd[1]: Reloading requested from client PID 1474 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:34:55.944348 systemd[1]: Reloading...
Feb 13 15:34:56.112681 zram_generator::config[1543]: No configuration found.
Feb 13 15:34:56.463852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:34:56.622022 systemd[1]: Reloading finished in 668 ms.
Feb 13 15:34:56.676868 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:34:56.690007 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:34:56.704098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:34:56.741351 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:34:56.742819 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:34:56.744235 systemd[1]: Reloading requested from client PID 1595 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:34:56.744257 systemd[1]: Reloading...
Feb 13 15:34:56.745098 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:34:56.746064 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Feb 13 15:34:56.746202 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Feb 13 15:34:56.766306 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:34:56.768996 systemd-tmpfiles[1596]: Skipping /boot
Feb 13 15:34:56.809838 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:34:56.809902 systemd-tmpfiles[1596]: Skipping /boot
Feb 13 15:34:56.894707 zram_generator::config[1623]: No configuration found.
Feb 13 15:34:57.153410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:34:57.211683 ldconfig[1471]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:34:57.244400 systemd[1]: Reloading finished in 499 ms.
Feb 13 15:34:57.282944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:34:57.286417 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:34:57.300237 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:34:57.326178 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:34:57.340821 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:34:57.352930 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:34:57.377298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:34:57.397620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:34:57.409113 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:34:57.420412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:34:57.420837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:34:57.431813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:34:57.443071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:34:57.453787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:34:57.458681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:34:57.471207 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:34:57.472708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:34:57.474315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:34:57.476588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:34:57.480153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:34:57.481101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:34:57.483407 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:34:57.486006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:34:57.502454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:34:57.505000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:34:57.511173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:57.511574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:57.522055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:34:57.545850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:34:57.559254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:34:57.561294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:57.561613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:57.565356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:34:57.569625 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:34:57.586184 systemd-udevd[1686]: Using default interface naming scheme 'v255'. 
Feb 13 15:34:57.595079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:57.595457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:57.610072 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:34:57.613276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:57.613589 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:34:57.633149 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:34:57.634865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:57.663855 systemd[1]: Finished ensure-sysext.service. Feb 13 15:34:57.669339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:34:57.670396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:34:57.684751 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:34:57.687531 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:34:57.730581 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:34:57.731068 augenrules[1718]: No rules Feb 13 15:34:57.732731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:34:57.740746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:34:57.741032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 15:34:57.745961 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:34:57.746369 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:34:57.749292 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:34:57.754316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:34:57.776605 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:34:57.778055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:34:57.778145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:34:57.778399 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:34:57.788388 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:34:57.788783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:34:57.946795 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:34:57.960001 systemd-networkd[1735]: lo: Link UP Feb 13 15:34:57.960014 systemd-networkd[1735]: lo: Gained carrier Feb 13 15:34:57.960890 systemd-networkd[1735]: Enumeration completed Feb 13 15:34:57.961110 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:34:57.971874 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:34:57.987893 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:34:58.014925 systemd-resolved[1684]: Positive Trust Anchors: Feb 13 15:34:58.014952 systemd-resolved[1684]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:34:58.015038 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:34:58.026357 systemd-resolved[1684]: Defaulting to hostname 'linux'. Feb 13 15:34:58.031324 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:34:58.032960 systemd[1]: Reached target network.target - Network. Feb 13 15:34:58.034651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:34:58.044710 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:34:58.063663 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:34:58.118980 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1744) Feb 13 15:34:58.141439 systemd-networkd[1735]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:58.141454 systemd-networkd[1735]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:34:58.153532 systemd-networkd[1735]: eth0: Link UP Feb 13 15:34:58.153756 systemd-networkd[1735]: eth0: Gained carrier Feb 13 15:34:58.153790 systemd-networkd[1735]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
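The "Positive Trust Anchors" entry above is the root zone's KSK-2017 DS record that systemd-resolved builds in for DNSSEC validation, and the negative anchors are private/reverse zones where validation is skipped. Additional anchors can be supplied as drop-in files; a sketch of that format under assumed file names (see dnssec-trust-anchors.d(5)):

```
# /etc/dnssec-trust-anchors.d/root.positive (hypothetical file name)
# DS record in zone-file syntax — same record systemd-resolved logged above:
. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

# /etc/dnssec-trust-anchors.d/private.negative (hypothetical file name)
# One domain per line; resolved will not require DNSSEC proofs beneath these:
home.arpa
168.192.in-addr.arpa
```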
Feb 13 15:34:58.158717 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:34:58.162655 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 15:34:58.168059 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:34:58.169209 systemd-networkd[1735]: eth0: DHCPv4 address 172.31.27.74/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:34:58.220699 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 15:34:58.358692 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:34:58.390140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:58.425032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:34:58.431886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:34:58.433611 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:34:58.440033 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:34:58.452159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:34:58.464661 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:34:58.508963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:34:58.510723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:34:58.533267 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:34:58.541581 lvm[1854]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:34:58.576192 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
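The DHCPv4 lease logged above (172.31.27.74/20 from gateway 172.31.16.1) can be sanity-checked with Python's standard `ipaddress` module; a small sketch, independent of this host:

```python
import ipaddress

# Address and prefix exactly as logged by systemd-networkd.
iface = ipaddress.ip_interface("172.31.27.74/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20 — the subnet the lease belongs to
print(gateway in iface.network)  # True — the gateway is on-link
print(iface.network.num_addresses)  # 4096 addresses in a /20
```

The /20 mask keeps the top four bits of the third octet, so 172.31.16.0–172.31.31.255 are all on-link, which is why .27.74 and the .16.1 gateway share a subnet.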
Feb 13 15:34:58.701434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:58.703132 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:34:58.705032 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:34:58.706770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:34:58.709182 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:34:58.710906 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:34:58.712576 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:34:58.714311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:34:58.714344 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:34:58.715681 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:34:58.718062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:34:58.721428 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:34:58.728937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:34:58.730848 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:34:58.732427 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:34:58.733423 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:34:58.734654 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:34:58.734682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:34:58.750901 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 15:34:58.754043 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:34:58.761105 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:34:58.773175 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:34:58.783040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:34:58.786956 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:34:58.816165 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:34:58.822315 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:34:58.828865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:34:58.835966 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:34:58.840914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:34:58.855736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:34:58.859950 jq[1864]: false Feb 13 15:34:58.866860 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:34:58.869505 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:34:58.870814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:34:58.880877 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:34:58.891798 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:34:58.896757 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 13 15:34:58.898709 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:34:58.926471 extend-filesystems[1865]: Found loop4 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found loop5 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found loop6 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found loop7 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p1 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p2 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p3 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found usr Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p4 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p6 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p7 Feb 13 15:34:58.928854 extend-filesystems[1865]: Found nvme0n1p9 Feb 13 15:34:58.928854 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9
Feb 13 15:34:58.976665 jq[1878]: true
Feb 13 15:34:58.992105 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:34:58.992145 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:34:58.992156 ntpd[1867]: ----------------------------------------------------
Feb 13 15:34:58.992165 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:34:58.992174 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:34:58.992184 ntpd[1867]: corporation. Support and training for ntp-4 are
Feb 13 15:34:58.992284 ntpd[1867]: available at https://www.nwtime.org/support
Feb 13 15:34:58.992297 ntpd[1867]: ----------------------------------------------------
Feb 13 15:34:58.992948 dbus-daemon[1863]: [system] SELinux support is enabled
Feb 13 15:34:58.994928 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:34:58.996110 dbus-daemon[1863]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1735 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:34:58.997265 ntpd[1867]: proto: precision = 0.096 usec (-23)
Feb 13 15:34:59.001327 ntpd[1867]: basedate set to 2025-02-01
Feb 13 15:34:59.001350 ntpd[1867]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:34:59.006296 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:34:59.006355 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:34:59.007517 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:34:59.007565 ntpd[1867]: Listen normally on 3 eth0 172.31.27.74:123
Feb 13 15:34:59.007667 ntpd[1867]: Listen normally on 4 lo [::1]:123
Feb 13 15:34:59.007776 ntpd[1867]: bind(21) AF_INET6 fe80::4df:98ff:feef:6353%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:34:59.007801 ntpd[1867]: unable to create socket on eth0 (5) for fe80::4df:98ff:feef:6353%2#123
Feb 13 15:34:59.007819 ntpd[1867]: failed to init interface for address fe80::4df:98ff:feef:6353%2
Feb 13 15:34:59.007858 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:34:59.008042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:34:59.008551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:34:59.012498 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:34:59.012537 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:34:59.014830 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:34:59.014896 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:34:59.018024 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:34:59.018059 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:34:59.063926 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:34:59.064287 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:34:59.081919 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:34:59.083449 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:34:59.084049 update_engine[1877]: I20250213 15:34:59.060484 1877 main.cc:92] Flatcar Update Engine starting
Feb 13 15:34:59.084049 update_engine[1877]: I20250213 15:34:59.065127 1877 update_check_scheduler.cc:74] Next update check in 4m43s
Feb 13 15:34:59.113661 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:34:59.119932 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:34:59.130579 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:34:59.136101 jq[1896]: true Feb 13 15:34:59.138369 (ntainerd)[1904]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:34:59.148360 tar[1894]: linux-amd64/helm Feb 13 15:34:59.156681 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9 Feb 13 15:34:59.166557 extend-filesystems[1916]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:34:59.181409 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:34:59.336322 systemd-logind[1873]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:34:59.336359 systemd-logind[1873]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:34:59.336383 systemd-logind[1873]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:34:59.362717 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:34:59.373619 systemd-logind[1873]: New seat seat0. Feb 13 15:34:59.383489 extend-filesystems[1916]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:34:59.383489 extend-filesystems[1916]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:34:59.383489 extend-filesystems[1916]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:34:59.389127 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:34:59.392856 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:34:59.395777 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:34:59.396011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:34:59.425955 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1740) Feb 13 15:34:59.421123 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
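The resize2fs output above grew the root filesystem online from 553472 to 1489915 blocks of 4 KiB each; the block counts convert to byte sizes directly. A quick check of the arithmetic:

```python
BLOCK = 4096  # "(4k)" block size per the resize2fs message

old_blocks, new_blocks = 553_472, 1_489_915
old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK

print(old_bytes, round(old_bytes / 2**30, 2))  # 2267021312 → ~2.11 GiB before
print(new_bytes, round(new_bytes / 2**30, 2))  # 6102691840 → ~5.68 GiB after
```

So the first boot expanded the root partition's filesystem from roughly 2.1 GiB to the full ~5.7 GiB of the underlying EBS partition, without unmounting it.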
Feb 13 15:34:59.426156 bash[1938]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:34:59.430666 coreos-metadata[1862]: Feb 13 15:34:59.428 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:34:59.430978 coreos-metadata[1862]: Feb 13 15:34:59.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:34:59.439004 systemd[1]: Starting sshkeys.service... Feb 13 15:34:59.452471 coreos-metadata[1862]: Feb 13 15:34:59.449 INFO Fetch successful Feb 13 15:34:59.452471 coreos-metadata[1862]: Feb 13 15:34:59.449 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:34:59.452471 coreos-metadata[1862]: Feb 13 15:34:59.452 INFO Fetch successful Feb 13 15:34:59.452471 coreos-metadata[1862]: Feb 13 15:34:59.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:34:59.452809 coreos-metadata[1862]: Feb 13 15:34:59.452 INFO Fetch successful Feb 13 15:34:59.452809 coreos-metadata[1862]: Feb 13 15:34:59.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:34:59.453394 coreos-metadata[1862]: Feb 13 15:34:59.453 INFO Fetch successful Feb 13 15:34:59.453489 coreos-metadata[1862]: Feb 13 15:34:59.453 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:34:59.456316 coreos-metadata[1862]: Feb 13 15:34:59.453 INFO Fetch failed with 404: resource not found Feb 13 15:34:59.456316 coreos-metadata[1862]: Feb 13 15:34:59.453 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:34:59.462322 coreos-metadata[1862]: Feb 13 15:34:59.462 INFO Fetch successful Feb 13 15:34:59.462322 coreos-metadata[1862]: Feb 13 15:34:59.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:34:59.470730 coreos-metadata[1862]: Feb 13 15:34:59.470 INFO Fetch successful Feb 13 15:34:59.470828 
coreos-metadata[1862]: Feb 13 15:34:59.470 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:34:59.480159 coreos-metadata[1862]: Feb 13 15:34:59.480 INFO Fetch successful Feb 13 15:34:59.480159 coreos-metadata[1862]: Feb 13 15:34:59.480 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:34:59.480741 coreos-metadata[1862]: Feb 13 15:34:59.480 INFO Fetch successful Feb 13 15:34:59.480741 coreos-metadata[1862]: Feb 13 15:34:59.480 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:34:59.481484 coreos-metadata[1862]: Feb 13 15:34:59.481 INFO Fetch successful Feb 13 15:34:59.535393 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:34:59.535759 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:34:59.545402 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1909 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:34:59.552838 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:34:59.566770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:34:59.580826 systemd-networkd[1735]: eth0: Gained IPv6LL Feb 13 15:34:59.583073 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:34:59.597719 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:34:59.601182 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:34:59.641025 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:34:59.697993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
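The coreos-metadata fetches above follow the EC2 instance metadata pattern: a PUT to `latest/api/token` for a session token, then GETs against versioned `meta-data/` paths, where a 404 (as for `ipv6` here) just means the attribute is absent. A sketch of the same flow; the helper names are ours, not coreos-metadata's, and the live calls only succeed on an actual EC2 instance:

```python
import urllib.request

IMDS = "http://169.254.169.254"  # endpoint exactly as logged by coreos-metadata

def token_request(ttl_seconds=21600):
    """Build the session-token request (PUT latest/api/token)."""
    req = urllib.request.Request(f"{IMDS}/latest/api/token", method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl_seconds))
    return req

def metadata_request(path, token):
    """Build a versioned meta-data request, e.g. path='instance-id'."""
    req = urllib.request.Request(f"{IMDS}/2021-01-03/meta-data/{path}")
    req.add_header("X-aws-ec2-metadata-token", token)
    return req

# On an EC2 instance you would then do:
#   token = urllib.request.urlopen(token_request()).read().decode()
#   urllib.request.urlopen(metadata_request("instance-id", token)).read().decode()
```

Note the `2021-01-03` API version in the URLs matches the paths in the log above; `latest/` would also work but pins nothing.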
Feb 13 15:34:59.710712 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:34:59.785658 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:34:59.797902 polkitd[1970]: Started polkitd version 121 Feb 13 15:34:59.799206 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:34:59.863328 polkitd[1970]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:34:59.863808 polkitd[1970]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:34:59.880253 polkitd[1970]: Finished loading, compiling and executing 2 rules Feb 13 15:34:59.881171 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:34:59.880958 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:34:59.886914 polkitd[1970]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:34:59.916548 coreos-metadata[1968]: Feb 13 15:34:59.916 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:34:59.919082 coreos-metadata[1968]: Feb 13 15:34:59.918 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:34:59.931026 coreos-metadata[1968]: Feb 13 15:34:59.922 INFO Fetch successful Feb 13 15:34:59.931026 coreos-metadata[1968]: Feb 13 15:34:59.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:34:59.940388 coreos-metadata[1968]: Feb 13 15:34:59.934 INFO Fetch successful Feb 13 15:34:59.946025 unknown[1968]: wrote ssh authorized keys file for user: core Feb 13 15:34:59.988371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:35:00.080357 systemd-hostnamed[1909]: Hostname set to (transient) Feb 13 15:35:00.083759 systemd-resolved[1684]: System hostname changed to 'ip-172-31-27-74'. 
Feb 13 15:35:00.133377 amazon-ssm-agent[1972]: Initializing new seelog logger Feb 13 15:35:00.133377 amazon-ssm-agent[1972]: New Seelog Logger Creation Complete Feb 13 15:35:00.133377 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.133377 amazon-ssm-agent[1972]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.138051 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 processing appconfig overrides Feb 13 15:35:00.140723 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.141859 locksmithd[1910]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:35:00.146937 amazon-ssm-agent[1972]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.146937 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 processing appconfig overrides Feb 13 15:35:00.146937 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.146937 amazon-ssm-agent[1972]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.146937 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 processing appconfig overrides Feb 13 15:35:00.150581 update-ssh-keys[2038]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:35:00.150954 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO Proxy environment variables: Feb 13 15:35:00.151045 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:35:00.167154 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:35:00.167154 amazon-ssm-agent[1972]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:35:00.167154 amazon-ssm-agent[1972]: 2025/02/13 15:35:00 processing appconfig overrides
Feb 13 15:35:00.155771 systemd[1]: Finished sshkeys.service.
Feb 13 15:35:00.254656 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO https_proxy:
Feb 13 15:35:00.362471 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO http_proxy:
Feb 13 15:35:00.371435 containerd[1904]: time="2025-02-13T15:35:00.371335548Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:35:00.468284 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO no_proxy:
Feb 13 15:35:00.576728 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:35:00.578715 containerd[1904]: time="2025-02-13T15:35:00.578661222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.582949 containerd[1904]: time="2025-02-13T15:35:00.582458469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:00.583168 containerd[1904]: time="2025-02-13T15:35:00.583144180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:35:00.583414 containerd[1904]: time="2025-02-13T15:35:00.583392840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:35:00.584051 containerd[1904]: time="2025-02-13T15:35:00.584026642Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:35:00.584171 containerd[1904]: time="2025-02-13T15:35:00.584154626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.584345 containerd[1904]: time="2025-02-13T15:35:00.584325271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:00.584428 containerd[1904]: time="2025-02-13T15:35:00.584415002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.584877 containerd[1904]: time="2025-02-13T15:35:00.584837742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:00.585475 containerd[1904]: time="2025-02-13T15:35:00.585453309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.585737 containerd[1904]: time="2025-02-13T15:35:00.585713342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:00.585832 containerd[1904]: time="2025-02-13T15:35:00.585814964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.586181 containerd[1904]: time="2025-02-13T15:35:00.586159040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.586713 containerd[1904]: time="2025-02-13T15:35:00.586692208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:00.587101 containerd[1904]: time="2025-02-13T15:35:00.587063964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:00.587198 containerd[1904]: time="2025-02-13T15:35:00.587184304Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:35:00.587482 containerd[1904]: time="2025-02-13T15:35:00.587455070Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:35:00.587869 containerd[1904]: time="2025-02-13T15:35:00.587849780Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:35:00.605906 containerd[1904]: time="2025-02-13T15:35:00.605847659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:35:00.606113 containerd[1904]: time="2025-02-13T15:35:00.605932487Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:35:00.606113 containerd[1904]: time="2025-02-13T15:35:00.605958126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:35:00.606113 containerd[1904]: time="2025-02-13T15:35:00.606000879Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:35:00.606113 containerd[1904]: time="2025-02-13T15:35:00.606022926Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:35:00.607654 containerd[1904]: time="2025-02-13T15:35:00.606217202Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608038609Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608277491Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608303407Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608322027Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608343882Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608365608Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608384494Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608404877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608426774Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608448404Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608467624Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608484649Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608512961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.609730 containerd[1904]: time="2025-02-13T15:35:00.608532551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608549763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608568614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608586886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608605831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608632071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608674176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608693132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608711646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608728872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608748382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608766520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608950435Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.608989847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.609012555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.610341 containerd[1904]: time="2025-02-13T15:35:00.609029540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609086304Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609111811Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609127877Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609146992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609161372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609181363Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609252767Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:35:00.611421 containerd[1904]: time="2025-02-13T15:35:00.609303829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:35:00.615884 containerd[1904]: time="2025-02-13T15:35:00.614769872Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:35:00.615884 containerd[1904]: time="2025-02-13T15:35:00.614871788Z" level=info msg="Connect containerd service"
Feb 13 15:35:00.615884 containerd[1904]: time="2025-02-13T15:35:00.614934662Z" level=info msg="using legacy CRI server"
Feb 13 15:35:00.615884 containerd[1904]: time="2025-02-13T15:35:00.614944803Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:35:00.615884 containerd[1904]: time="2025-02-13T15:35:00.615328646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:35:00.617359 containerd[1904]: time="2025-02-13T15:35:00.617319222Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623619354Z" level=info msg="Start subscribing containerd event"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623706015Z" level=info msg="Start recovering state"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623801814Z" level=info msg="Start event monitor"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623816385Z" level=info msg="Start snapshots syncer"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623833810Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.623846773Z" level=info msg="Start streaming server"
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.625482186Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:35:00.626676 containerd[1904]: time="2025-02-13T15:35:00.625547496Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:35:00.631374 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:35:00.634951 containerd[1904]: time="2025-02-13T15:35:00.632989247Z" level=info msg="containerd successfully booted in 0.267053s"
Feb 13 15:35:00.662627 sshd_keygen[1900]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:35:00.675724 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:35:00.733280 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:35:00.746340 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:35:00.774425 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO Agent will take identity from EC2
Feb 13 15:35:00.779254 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:35:00.780706 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:35:00.796086 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:35:00.848595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:35:00.860146 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:35:00.869081 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:35:00.873054 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:35:00.873869 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:35:00.972552 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:35:01.056446 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [Registrar] Starting registrar module
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:01 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:01 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:01 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:35:01.059248 amazon-ssm-agent[1972]: 2025-02-13 15:35:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:35:01.072125 amazon-ssm-agent[1972]: 2025-02-13 15:35:01 INFO [CredentialRefresher] Next credential rotation will be in 31.574955957866667 minutes
Feb 13 15:35:01.190012 tar[1894]: linux-amd64/LICENSE
Feb 13 15:35:01.190777 tar[1894]: linux-amd64/README.md
Feb 13 15:35:01.206962 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:35:01.993084 ntpd[1867]: Listen normally on 6 eth0 [fe80::4df:98ff:feef:6353%2]:123
Feb 13 15:35:01.994085 ntpd[1867]: 13 Feb 15:35:01 ntpd[1867]: Listen normally on 6 eth0 [fe80::4df:98ff:feef:6353%2]:123
Feb 13 15:35:02.084060 amazon-ssm-agent[1972]: 2025-02-13 15:35:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:35:02.190617 amazon-ssm-agent[1972]: 2025-02-13 15:35:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2106) started
Feb 13 15:35:02.286925 amazon-ssm-agent[1972]: 2025-02-13 15:35:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:35:02.616915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:02.621890 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:35:02.633743 systemd[1]: Startup finished in 887ms (kernel) + 8.377s (initrd) + 9.597s (userspace) = 18.862s.
Feb 13 15:35:02.820417 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:35:04.249726 kubelet[2122]: E0213 15:35:04.249658    2122 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:35:04.254479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:35:04.254702 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:35:04.256145 systemd[1]: kubelet.service: Consumed 1.048s CPU time.
Feb 13 15:35:08.700785 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:35:08.708057 systemd[1]: Started sshd@0-172.31.27.74:22-139.178.89.65:35936.service - OpenSSH per-connection server daemon (139.178.89.65:35936).
Feb 13 15:35:08.923277 sshd[2135]: Accepted publickey for core from 139.178.89.65 port 35936 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:35:08.924959 sshd-session[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:08.943666 systemd-logind[1873]: New session 1 of user core.
Feb 13 15:35:08.945372 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:35:08.959354 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:35:08.974677 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:35:08.984540 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:35:08.993942 (systemd)[2139]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:35:09.137723 systemd[2139]: Queued start job for default target default.target.
Feb 13 15:35:09.146301 systemd[2139]: Created slice app.slice - User Application Slice.
Feb 13 15:35:09.146349 systemd[2139]: Reached target paths.target - Paths.
Feb 13 15:35:09.146372 systemd[2139]: Reached target timers.target - Timers.
Feb 13 15:35:09.147795 systemd[2139]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:35:09.174460 systemd[2139]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:35:09.174616 systemd[2139]: Reached target sockets.target - Sockets.
Feb 13 15:35:09.174653 systemd[2139]: Reached target basic.target - Basic System.
Feb 13 15:35:09.174710 systemd[2139]: Reached target default.target - Main User Target.
Feb 13 15:35:09.174747 systemd[2139]: Startup finished in 172ms.
Feb 13 15:35:09.175228 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:35:09.187878 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:35:09.360795 systemd[1]: Started sshd@1-172.31.27.74:22-139.178.89.65:35938.service - OpenSSH per-connection server daemon (139.178.89.65:35938).
Feb 13 15:35:09.544584 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 35938 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:35:09.546301 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:09.555311 systemd-logind[1873]: New session 2 of user core.
Feb 13 15:35:09.562891 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:35:09.702141 sshd[2152]: Connection closed by 139.178.89.65 port 35938
Feb 13 15:35:09.702811 sshd-session[2150]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:09.706669 systemd[1]: sshd@1-172.31.27.74:22-139.178.89.65:35938.service: Deactivated successfully.
Feb 13 15:35:09.708769 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:35:09.710364 systemd-logind[1873]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:35:09.714079 systemd-logind[1873]: Removed session 2.
Feb 13 15:35:09.742117 systemd[1]: Started sshd@2-172.31.27.74:22-139.178.89.65:35950.service - OpenSSH per-connection server daemon (139.178.89.65:35950).
Feb 13 15:35:09.943297 sshd[2157]: Accepted publickey for core from 139.178.89.65 port 35950 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:35:09.945060 sshd-session[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:09.971873 systemd-logind[1873]: New session 3 of user core.
Feb 13 15:35:09.983338 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:35:10.100092 sshd[2159]: Connection closed by 139.178.89.65 port 35950
Feb 13 15:35:10.100836 sshd-session[2157]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:10.105303 systemd[1]: sshd@2-172.31.27.74:22-139.178.89.65:35950.service: Deactivated successfully.
Feb 13 15:35:10.107568 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:35:10.109628 systemd-logind[1873]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:35:10.111222 systemd-logind[1873]: Removed session 3.
Feb 13 15:35:10.139030 systemd[1]: Started sshd@3-172.31.27.74:22-139.178.89.65:35964.service - OpenSSH per-connection server daemon (139.178.89.65:35964).
Feb 13 15:35:10.337532 sshd[2164]: Accepted publickey for core from 139.178.89.65 port 35964 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:35:10.339281 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:10.345091 systemd-logind[1873]: New session 4 of user core.
Feb 13 15:35:10.353344 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:35:10.482221 sshd[2166]: Connection closed by 139.178.89.65 port 35964
Feb 13 15:35:10.482899 sshd-session[2164]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:10.495059 systemd[1]: sshd@3-172.31.27.74:22-139.178.89.65:35964.service: Deactivated successfully.
Feb 13 15:35:10.500070 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:35:10.508507 systemd-logind[1873]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:35:10.538164 systemd[1]: Started sshd@4-172.31.27.74:22-139.178.89.65:35978.service - OpenSSH per-connection server daemon (139.178.89.65:35978).
Feb 13 15:35:10.540433 systemd-logind[1873]: Removed session 4.
Feb 13 15:35:10.713119 sshd[2171]: Accepted publickey for core from 139.178.89.65 port 35978 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:35:10.715296 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:35:10.725654 systemd-logind[1873]: New session 5 of user core.
Feb 13 15:35:10.729847 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:35:10.872385 sudo[2174]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:35:10.872810 sudo[2174]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:35:11.655972 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:35:11.660018 (dockerd)[2191]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:35:12.388331 dockerd[2191]: time="2025-02-13T15:35:12.387680531Z" level=info msg="Starting up"
Feb 13 15:35:12.553049 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4131418262-merged.mount: Deactivated successfully.
Feb 13 15:35:12.589220 dockerd[2191]: time="2025-02-13T15:35:12.588865265Z" level=info msg="Loading containers: start."
Feb 13 15:35:12.837882 kernel: Initializing XFRM netlink socket
Feb 13 15:35:12.882685 (udev-worker)[2298]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:35:12.948502 systemd-networkd[1735]: docker0: Link UP
Feb 13 15:35:12.985164 dockerd[2191]: time="2025-02-13T15:35:12.985107476Z" level=info msg="Loading containers: done."
Feb 13 15:35:13.005982 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3237771422-merged.mount: Deactivated successfully.
Feb 13 15:35:13.014981 dockerd[2191]: time="2025-02-13T15:35:13.014923628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:35:13.015227 dockerd[2191]: time="2025-02-13T15:35:13.015075393Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:35:13.015282 dockerd[2191]: time="2025-02-13T15:35:13.015228143Z" level=info msg="Daemon has completed initialization"
Feb 13 15:35:13.086923 dockerd[2191]: time="2025-02-13T15:35:13.086089151Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:35:13.086414 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:35:14.470927 containerd[1904]: time="2025-02-13T15:35:14.470878102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:35:14.507102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:35:14.516903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:14.777621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:14.789199 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:35:14.867005 kubelet[2390]: E0213 15:35:14.866923    2390 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:35:14.871682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:35:14.871892 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:35:15.207212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602677213.mount: Deactivated successfully.
Feb 13 15:35:18.186035 containerd[1904]: time="2025-02-13T15:35:18.185903661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:18.187582 containerd[1904]: time="2025-02-13T15:35:18.187546256Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214"
Feb 13 15:35:18.188666 containerd[1904]: time="2025-02-13T15:35:18.188375515Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:18.192957 containerd[1904]: time="2025-02-13T15:35:18.192895698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:18.194521 containerd[1904]: time="2025-02-13T15:35:18.194055259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.723046403s"
Feb 13 15:35:18.194521 containerd[1904]: time="2025-02-13T15:35:18.194098729Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\""
Feb 13 15:35:18.221787 containerd[1904]: time="2025-02-13T15:35:18.221749855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:35:20.855776 containerd[1904]: time="2025-02-13T15:35:20.855721178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:20.857977 containerd[1904]: time="2025-02-13T15:35:20.857793586Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545"
Feb 13 15:35:20.861664 containerd[1904]: time="2025-02-13T15:35:20.860268511Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:20.864852 containerd[1904]: time="2025-02-13T15:35:20.864789288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:20.872687 containerd[1904]: time="2025-02-13T15:35:20.868298343Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.64650943s"
Feb 13 15:35:20.872687 containerd[1904]: time="2025-02-13T15:35:20.868343235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 15:35:20.899061 containerd[1904]: time="2025-02-13T15:35:20.898823768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:35:22.712320 containerd[1904]: time="2025-02-13T15:35:22.712268143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:22.714159 containerd[1904]: time="2025-02-13T15:35:22.713950069Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130"
Feb 13 15:35:22.716293 containerd[1904]: time="2025-02-13T15:35:22.715059349Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:22.717732 containerd[1904]: time="2025-02-13T15:35:22.717698345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:22.719001 containerd[1904]: time="2025-02-13T15:35:22.718963254Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.820094709s"
Feb 13 15:35:22.719096 containerd[1904]: time="2025-02-13T15:35:22.719005277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\""
Feb 13 15:35:22.743597 containerd[1904]: time="2025-02-13T15:35:22.743560720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:35:24.018279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813286470.mount: Deactivated successfully.
Feb 13 15:35:25.123141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:35:25.141881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:25.224199 containerd[1904]: time="2025-02-13T15:35:25.223881807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:25.228589 containerd[1904]: time="2025-02-13T15:35:25.228468570Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858"
Feb 13 15:35:25.235132 containerd[1904]: time="2025-02-13T15:35:25.234854348Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:25.242870 containerd[1904]: time="2025-02-13T15:35:25.242706476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:25.244195 containerd[1904]: time="2025-02-13T15:35:25.244031830Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest
\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.500430199s" Feb 13 15:35:25.244195 containerd[1904]: time="2025-02-13T15:35:25.244076421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:35:25.284017 containerd[1904]: time="2025-02-13T15:35:25.283964266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:35:25.409506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:25.423118 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:25.484057 kubelet[2495]: E0213 15:35:25.483998 2495 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:25.487166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:25.487368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:25.939470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412654540.mount: Deactivated successfully. 
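The repeated kubelet failures above all reduce to one condition: /var/lib/kubelet/config.yaml does not exist yet, which is normal on a node that has not completed kubeadm init/join. A minimal sketch (a hypothetical helper, not part of this log or of kubelet) that pulls the missing config path out of such an error line; the `line` string below is an abbreviated copy of the log entry:

```python
import re

# Abbreviated form of the kubelet error line seen in the log above.
line = ('kubelet[2495]: E0213 15:35:25.483998 2495 run.go:74] "command failed" '
        'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
        'error: open /var/lib/kubelet/config.yaml: no such file or directory"')

def missing_config_path(log_line: str):
    """Return the path from a 'failed to load kubelet config file' error, or None."""
    m = re.search(r'failed to load kubelet config file, path: (\S+?),', log_line)
    return m.group(1) if m else None

print(missing_config_path(line))  # -> /var/lib/kubelet/config.yaml
```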
Feb 13 15:35:27.483084 containerd[1904]: time="2025-02-13T15:35:27.483030246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:27.485675 containerd[1904]: time="2025-02-13T15:35:27.485450423Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 15:35:27.487430 containerd[1904]: time="2025-02-13T15:35:27.486918964Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:27.494287 containerd[1904]: time="2025-02-13T15:35:27.494233505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:27.508008 containerd[1904]: time="2025-02-13T15:35:27.507828113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.223808159s"
Feb 13 15:35:27.508008 containerd[1904]: time="2025-02-13T15:35:27.507883021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 15:35:27.539112 containerd[1904]: time="2025-02-13T15:35:27.539039203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:35:28.031411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153341317.mount: Deactivated successfully.
Feb 13 15:35:28.044543 containerd[1904]: time="2025-02-13T15:35:28.044487740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:28.046687 containerd[1904]: time="2025-02-13T15:35:28.046600692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 15:35:28.056800 containerd[1904]: time="2025-02-13T15:35:28.054191400Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:28.066119 containerd[1904]: time="2025-02-13T15:35:28.066045427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:28.069669 containerd[1904]: time="2025-02-13T15:35:28.069259242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 530.136621ms"
Feb 13 15:35:28.069669 containerd[1904]: time="2025-02-13T15:35:28.069305673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 15:35:28.131832 containerd[1904]: time="2025-02-13T15:35:28.131795798Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 15:35:28.716182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849724112.mount: Deactivated successfully.
Feb 13 15:35:30.114743 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:35:31.816457 containerd[1904]: time="2025-02-13T15:35:31.816386470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:31.823105 containerd[1904]: time="2025-02-13T15:35:31.823010434Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Feb 13 15:35:31.829047 containerd[1904]: time="2025-02-13T15:35:31.828865306Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:31.842254 containerd[1904]: time="2025-02-13T15:35:31.842174370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:35:31.843825 containerd[1904]: time="2025-02-13T15:35:31.843570813Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.711731527s"
Feb 13 15:35:31.843825 containerd[1904]: time="2025-02-13T15:35:31.843622461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Feb 13 15:35:35.600973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 15:35:35.608091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:35.632052 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:35:35.632163 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:35:35.632825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:35.650037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:35.691848 systemd[1]: Reloading requested from client PID 2683 ('systemctl') (unit session-5.scope)...
Feb 13 15:35:35.691864 systemd[1]: Reloading...
Feb 13 15:35:35.910665 zram_generator::config[2726]: No configuration found.
Feb 13 15:35:36.137177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:35:36.295879 systemd[1]: Reloading finished in 602 ms.
Feb 13 15:35:36.361764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:35:36.362184 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:35:36.362997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:36.371111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:36.660141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:36.665276 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:35:36.741862 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:35:36.741862 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:35:36.741862 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:35:36.743807 kubelet[2782]: I0213 15:35:36.743742 2782 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:35:37.491178 kubelet[2782]: I0213 15:35:37.491010 2782 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:35:37.491178 kubelet[2782]: I0213 15:35:37.491138 2782 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:35:37.492061 kubelet[2782]: I0213 15:35:37.492032 2782 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:35:37.537073 kubelet[2782]: I0213 15:35:37.536975 2782 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:35:37.541308 kubelet[2782]: E0213 15:35:37.541273 2782 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.580278 kubelet[2782]: I0213 15:35:37.580125 2782 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:35:37.580630 kubelet[2782]: I0213 15:35:37.580590 2782 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:35:37.580898 kubelet[2782]: I0213 15:35:37.580630 2782 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:35:37.582284 kubelet[2782]: I0213 15:35:37.582232 2782 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:35:37.582284 kubelet[2782]: I0213 15:35:37.582285 2782 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:35:37.582623 kubelet[2782]: I0213 15:35:37.582603 2782 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:35:37.585271 kubelet[2782]: I0213 15:35:37.584789 2782 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:35:37.585271 kubelet[2782]: I0213 15:35:37.584865 2782 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:35:37.585271 kubelet[2782]: I0213 15:35:37.585027 2782 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:35:37.585271 kubelet[2782]: I0213 15:35:37.585133 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:35:37.589267 kubelet[2782]: W0213 15:35:37.589193 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.590126 kubelet[2782]: E0213 15:35:37.589626 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.592370 kubelet[2782]: W0213 15:35:37.592279 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.593849 kubelet[2782]: E0213 15:35:37.592387 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.594935 kubelet[2782]: I0213 15:35:37.594755 2782 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:35:37.598427 kubelet[2782]: I0213 15:35:37.597453 2782 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:35:37.598427 kubelet[2782]: W0213 15:35:37.597534 2782 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:35:37.598611 kubelet[2782]: I0213 15:35:37.598493 2782 server.go:1264] "Started kubelet"
Feb 13 15:35:37.610668 kubelet[2782]: I0213 15:35:37.610035 2782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:35:37.612972 kubelet[2782]: I0213 15:35:37.612517 2782 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:35:37.620558 kubelet[2782]: I0213 15:35:37.613991 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:35:37.622120 kubelet[2782]: I0213 15:35:37.621927 2782 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:35:37.622784 kubelet[2782]: E0213 15:35:37.622612 2782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.74:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-74.1823ce81b6396f28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-74,UID:ip-172-31-27-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-74,},FirstTimestamp:2025-02-13 15:35:37.598463784 +0000 UTC m=+0.925309005,LastTimestamp:2025-02-13 15:35:37.598463784 +0000 UTC m=+0.925309005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-74,}"
Feb 13 15:35:37.638648 kubelet[2782]: I0213 15:35:37.635328 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:35:37.648180 kubelet[2782]: I0213 15:35:37.647359 2782 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:35:37.664838 kubelet[2782]: I0213 15:35:37.664803 2782 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:35:37.666252 kubelet[2782]: I0213 15:35:37.666175 2782 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:35:37.670253 kubelet[2782]: E0213 15:35:37.668249 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-74?timeout=10s\": dial tcp 172.31.27.74:6443: connect: connection refused" interval="200ms"
Feb 13 15:35:37.670253 kubelet[2782]: W0213 15:35:37.668619 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.670724 kubelet[2782]: E0213 15:35:37.670296 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.671454 kubelet[2782]: I0213 15:35:37.671412 2782 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:35:37.671753 kubelet[2782]: I0213 15:35:37.671536 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:35:37.679442 kubelet[2782]: I0213 15:35:37.678272 2782 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:35:37.702964 kubelet[2782]: I0213 15:35:37.702909 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:35:37.705110 kubelet[2782]: I0213 15:35:37.705071 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:35:37.705241 kubelet[2782]: I0213 15:35:37.705119 2782 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:35:37.705241 kubelet[2782]: I0213 15:35:37.705145 2782 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:35:37.705241 kubelet[2782]: E0213 15:35:37.705196 2782 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:35:37.717407 kubelet[2782]: E0213 15:35:37.717233 2782 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:35:37.719823 kubelet[2782]: W0213 15:35:37.719764 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.720765 kubelet[2782]: E0213 15:35:37.719832 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:37.731060 kubelet[2782]: I0213 15:35:37.731024 2782 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:35:37.731060 kubelet[2782]: I0213 15:35:37.731044 2782 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:35:37.731060 kubelet[2782]: I0213 15:35:37.731066 2782 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:35:37.734670 kubelet[2782]: I0213 15:35:37.734626 2782 policy_none.go:49] "None policy: Start"
Feb 13 15:35:37.735532 kubelet[2782]: I0213 15:35:37.735506 2782 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:35:37.735532 kubelet[2782]: I0213 15:35:37.735535 2782 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:35:37.746220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:35:37.755824 kubelet[2782]: I0213 15:35:37.755786 2782 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74"
Feb 13 15:35:37.764837 kubelet[2782]: E0213 15:35:37.758779 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.74:6443/api/v1/nodes\": dial tcp 172.31.27.74:6443: connect: connection refused" node="ip-172-31-27-74"
Feb 13 15:35:37.779873 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:35:37.795569 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:35:37.798710 kubelet[2782]: I0213 15:35:37.798340 2782 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:35:37.798710 kubelet[2782]: I0213 15:35:37.798630 2782 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:35:37.798867 kubelet[2782]: I0213 15:35:37.798793 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:35:37.803278 kubelet[2782]: E0213 15:35:37.803247 2782 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-74\" not found"
Feb 13 15:35:37.805534 kubelet[2782]: I0213 15:35:37.805494 2782 topology_manager.go:215] "Topology Admit Handler" podUID="d867fdb9ecbfe0ae1f46c434de4ec36b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:37.808293 kubelet[2782]: I0213 15:35:37.808054 2782 topology_manager.go:215] "Topology Admit Handler" podUID="a988b83ed381dcc124779befa478b212" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.810286 kubelet[2782]: I0213 15:35:37.810258 2782 topology_manager.go:215] "Topology Admit Handler" podUID="d4523beb2690f25ef2c046d112798780" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-74"
Feb 13 15:35:37.821085 systemd[1]: Created slice kubepods-burstable-podd867fdb9ecbfe0ae1f46c434de4ec36b.slice - libcontainer container kubepods-burstable-podd867fdb9ecbfe0ae1f46c434de4ec36b.slice.
Feb 13 15:35:37.845626 systemd[1]: Created slice kubepods-burstable-poda988b83ed381dcc124779befa478b212.slice - libcontainer container kubepods-burstable-poda988b83ed381dcc124779befa478b212.slice.
Feb 13 15:35:37.869743 kubelet[2782]: I0213 15:35:37.869710 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:37.872664 kubelet[2782]: E0213 15:35:37.871236 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-74?timeout=10s\": dial tcp 172.31.27.74:6443: connect: connection refused" interval="400ms"
Feb 13 15:35:37.872664 kubelet[2782]: I0213 15:35:37.871390 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:37.872664 kubelet[2782]: I0213 15:35:37.871464 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.872664 kubelet[2782]: I0213 15:35:37.871525 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.872664 kubelet[2782]: I0213 15:35:37.871555 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.872210 systemd[1]: Created slice kubepods-burstable-podd4523beb2690f25ef2c046d112798780.slice - libcontainer container kubepods-burstable-podd4523beb2690f25ef2c046d112798780.slice.
Feb 13 15:35:37.873901 kubelet[2782]: I0213 15:35:37.871580 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.873901 kubelet[2782]: I0213 15:35:37.871607 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:37.873901 kubelet[2782]: I0213 15:35:37.871652 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-ca-certs\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:37.873901 kubelet[2782]: I0213 15:35:37.871679 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4523beb2690f25ef2c046d112798780-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-74\" (UID: \"d4523beb2690f25ef2c046d112798780\") " pod="kube-system/kube-scheduler-ip-172-31-27-74"
Feb 13 15:35:37.961477 kubelet[2782]: I0213 15:35:37.961430 2782 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74"
Feb 13 15:35:37.961952 kubelet[2782]: E0213 15:35:37.961906 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.74:6443/api/v1/nodes\": dial tcp 172.31.27.74:6443: connect: connection refused" node="ip-172-31-27-74"
Feb 13 15:35:38.143464 containerd[1904]: time="2025-02-13T15:35:38.143343521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-74,Uid:d867fdb9ecbfe0ae1f46c434de4ec36b,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:38.170299 containerd[1904]: time="2025-02-13T15:35:38.170253327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-74,Uid:a988b83ed381dcc124779befa478b212,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:38.179530 containerd[1904]: time="2025-02-13T15:35:38.179298768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-74,Uid:d4523beb2690f25ef2c046d112798780,Namespace:kube-system,Attempt:0,}"
Feb 13 15:35:38.272129 kubelet[2782]: E0213 15:35:38.272029 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-74?timeout=10s\": dial tcp 172.31.27.74:6443: connect: connection refused" interval="800ms"
Feb 13 15:35:38.367132 kubelet[2782]: I0213 15:35:38.367093 2782 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74"
Feb 13 15:35:38.369418 kubelet[2782]: E0213 15:35:38.368801 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.74:6443/api/v1/nodes\": dial tcp 172.31.27.74:6443: connect: connection refused" node="ip-172-31-27-74"
Feb 13 15:35:38.514432 kubelet[2782]: W0213 15:35:38.514285 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:38.514432 kubelet[2782]: E0213 15:35:38.514356 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:38.731749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438255958.mount: Deactivated successfully.
Feb 13 15:35:38.767840 kubelet[2782]: W0213 15:35:38.764356 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:38.767840 kubelet[2782]: E0213 15:35:38.764418 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused
Feb 13 15:35:38.770308 containerd[1904]: time="2025-02-13T15:35:38.766941492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:38.770308 containerd[1904]: time="2025-02-13T15:35:38.767860517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:35:38.775496 containerd[1904]: time="2025-02-13T15:35:38.775446038Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:35:38.785116 containerd[1904]: time="2025-02-13T15:35:38.785045267Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:38.787443 containerd[1904]: time="2025-02-13T15:35:38.787329284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:38.788915 containerd[1904]: time="2025-02-13T15:35:38.788184040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.935989ms" Feb 13 15:35:38.789244 containerd[1904]: time="2025-02-13T15:35:38.789215886Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:38.792422 containerd[1904]: time="2025-02-13T15:35:38.792331290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:35:38.794504 containerd[1904]: time="2025-02-13T15:35:38.794286499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:35:38.801753 containerd[1904]: time="2025-02-13T15:35:38.801513522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.170361ms" Feb 13 15:35:38.814533 containerd[1904]: time="2025-02-13T15:35:38.814416371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.850633ms" Feb 13 15:35:38.914355 kubelet[2782]: W0213 15:35:38.914273 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:38.914355 kubelet[2782]: E0213 15:35:38.914333 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:39.003709 kubelet[2782]: E0213 15:35:39.003004 2782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.74:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-74.1823ce81b6396f28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-74,UID:ip-172-31-27-74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-74,},FirstTimestamp:2025-02-13 15:35:37.598463784 +0000 UTC m=+0.925309005,LastTimestamp:2025-02-13 15:35:37.598463784 +0000 UTC m=+0.925309005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-74,}" Feb 13 15:35:39.072711 kubelet[2782]: E0213 15:35:39.072442 2782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.27.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-74?timeout=10s\": dial tcp 172.31.27.74:6443: connect: connection refused" interval="1.6s" Feb 13 15:35:39.111281 kubelet[2782]: W0213 15:35:39.110216 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:39.111281 kubelet[2782]: E0213 15:35:39.110295 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:39.178670 kubelet[2782]: I0213 15:35:39.177629 2782 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74" Feb 13 15:35:39.178670 kubelet[2782]: E0213 15:35:39.178218 2782 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.74:6443/api/v1/nodes\": dial tcp 172.31.27.74:6443: connect: connection refused" node="ip-172-31-27-74" Feb 13 15:35:39.225134 containerd[1904]: time="2025-02-13T15:35:39.225037797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:39.226097 containerd[1904]: time="2025-02-13T15:35:39.225867988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:39.226097 containerd[1904]: time="2025-02-13T15:35:39.225921981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.226097 containerd[1904]: time="2025-02-13T15:35:39.226053392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.236120 containerd[1904]: time="2025-02-13T15:35:39.222861573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:39.236120 containerd[1904]: time="2025-02-13T15:35:39.235842576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:39.236120 containerd[1904]: time="2025-02-13T15:35:39.235874216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.237797 containerd[1904]: time="2025-02-13T15:35:39.236050516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.255311 containerd[1904]: time="2025-02-13T15:35:39.255190526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:39.255311 containerd[1904]: time="2025-02-13T15:35:39.255273926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:39.256194 containerd[1904]: time="2025-02-13T15:35:39.255296433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.257851 containerd[1904]: time="2025-02-13T15:35:39.257762466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:39.283961 systemd[1]: Started cri-containerd-46256eb210f3e2248f32ea24f7e13b80dbe137bdac6c335f735a3a55ebe1c8db.scope - libcontainer container 46256eb210f3e2248f32ea24f7e13b80dbe137bdac6c335f735a3a55ebe1c8db. Feb 13 15:35:39.317982 systemd[1]: Started cri-containerd-cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955.scope - libcontainer container cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955. Feb 13 15:35:39.346338 systemd[1]: Started cri-containerd-2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565.scope - libcontainer container 2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565. Feb 13 15:35:39.429225 containerd[1904]: time="2025-02-13T15:35:39.428997990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-74,Uid:d867fdb9ecbfe0ae1f46c434de4ec36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"46256eb210f3e2248f32ea24f7e13b80dbe137bdac6c335f735a3a55ebe1c8db\"" Feb 13 15:35:39.435182 containerd[1904]: time="2025-02-13T15:35:39.434845195Z" level=info msg="CreateContainer within sandbox \"46256eb210f3e2248f32ea24f7e13b80dbe137bdac6c335f735a3a55ebe1c8db\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:35:39.480372 containerd[1904]: time="2025-02-13T15:35:39.480287596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-74,Uid:d4523beb2690f25ef2c046d112798780,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955\"" Feb 13 15:35:39.481811 containerd[1904]: time="2025-02-13T15:35:39.480453590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-74,Uid:a988b83ed381dcc124779befa478b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565\"" Feb 13 15:35:39.487533 
containerd[1904]: time="2025-02-13T15:35:39.487474496Z" level=info msg="CreateContainer within sandbox \"46256eb210f3e2248f32ea24f7e13b80dbe137bdac6c335f735a3a55ebe1c8db\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08055c96c37e0a378a7fd04013e2cbd7d4fcded05ec8d72914e52993d8c14c35\"" Feb 13 15:35:39.488595 containerd[1904]: time="2025-02-13T15:35:39.488551503Z" level=info msg="CreateContainer within sandbox \"cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:35:39.489126 containerd[1904]: time="2025-02-13T15:35:39.489082921Z" level=info msg="StartContainer for \"08055c96c37e0a378a7fd04013e2cbd7d4fcded05ec8d72914e52993d8c14c35\"" Feb 13 15:35:39.496203 containerd[1904]: time="2025-02-13T15:35:39.495405487Z" level=info msg="CreateContainer within sandbox \"2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:35:39.539930 systemd[1]: Started cri-containerd-08055c96c37e0a378a7fd04013e2cbd7d4fcded05ec8d72914e52993d8c14c35.scope - libcontainer container 08055c96c37e0a378a7fd04013e2cbd7d4fcded05ec8d72914e52993d8c14c35. 
Feb 13 15:35:39.541219 containerd[1904]: time="2025-02-13T15:35:39.541178767Z" level=info msg="CreateContainer within sandbox \"cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5\"" Feb 13 15:35:39.544932 containerd[1904]: time="2025-02-13T15:35:39.541752049Z" level=info msg="StartContainer for \"24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5\"" Feb 13 15:35:39.552907 containerd[1904]: time="2025-02-13T15:35:39.551333754Z" level=info msg="CreateContainer within sandbox \"2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d\"" Feb 13 15:35:39.560657 containerd[1904]: time="2025-02-13T15:35:39.558684194Z" level=info msg="StartContainer for \"eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d\"" Feb 13 15:35:39.638919 systemd[1]: Started cri-containerd-24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5.scope - libcontainer container 24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5. Feb 13 15:35:39.653365 systemd[1]: Started cri-containerd-eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d.scope - libcontainer container eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d. 
Feb 13 15:35:39.684240 kubelet[2782]: E0213 15:35:39.684183 2782 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:39.734043 containerd[1904]: time="2025-02-13T15:35:39.733990968Z" level=info msg="StartContainer for \"08055c96c37e0a378a7fd04013e2cbd7d4fcded05ec8d72914e52993d8c14c35\" returns successfully" Feb 13 15:35:39.822661 containerd[1904]: time="2025-02-13T15:35:39.822524379Z" level=info msg="StartContainer for \"24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5\" returns successfully" Feb 13 15:35:39.838931 containerd[1904]: time="2025-02-13T15:35:39.838263973Z" level=info msg="StartContainer for \"eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d\" returns successfully" Feb 13 15:35:40.581931 kubelet[2782]: W0213 15:35:40.581818 2782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:40.582561 kubelet[2782]: E0213 15:35:40.581944 2782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-74&limit=500&resourceVersion=0": dial tcp 172.31.27.74:6443: connect: connection refused Feb 13 15:35:40.784062 kubelet[2782]: I0213 15:35:40.784032 2782 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74" Feb 13 15:35:43.449189 kubelet[2782]: E0213 15:35:43.449101 2782 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ip-172-31-27-74\" not found" node="ip-172-31-27-74" Feb 13 15:35:43.524659 kubelet[2782]: I0213 15:35:43.523282 2782 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-74" Feb 13 15:35:43.540855 kubelet[2782]: E0213 15:35:43.540819 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:43.641816 kubelet[2782]: E0213 15:35:43.641762 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:43.742910 kubelet[2782]: E0213 15:35:43.742581 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:43.843809 kubelet[2782]: E0213 15:35:43.843690 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:43.944444 kubelet[2782]: E0213 15:35:43.944339 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.045351 kubelet[2782]: E0213 15:35:44.044532 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.145350 kubelet[2782]: E0213 15:35:44.145301 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.246411 kubelet[2782]: E0213 15:35:44.246361 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.347106 kubelet[2782]: E0213 15:35:44.346653 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.449195 kubelet[2782]: E0213 15:35:44.449152 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 
15:35:44.550014 kubelet[2782]: E0213 15:35:44.549960 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.650352 kubelet[2782]: E0213 15:35:44.650318 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.750975 kubelet[2782]: E0213 15:35:44.750688 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.778266 update_engine[1877]: I20250213 15:35:44.778191 1877 update_attempter.cc:509] Updating boot flags... Feb 13 15:35:44.851913 kubelet[2782]: E0213 15:35:44.851874 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:44.892005 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3070) Feb 13 15:35:44.954400 kubelet[2782]: E0213 15:35:44.954283 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:45.054884 kubelet[2782]: E0213 15:35:45.054818 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:45.156535 kubelet[2782]: E0213 15:35:45.156343 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:45.185740 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3072) Feb 13 15:35:45.257742 kubelet[2782]: E0213 15:35:45.257502 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:45.359715 kubelet[2782]: E0213 15:35:45.359263 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 
15:35:45.461722 kubelet[2782]: E0213 15:35:45.461688 2782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-74\" not found" Feb 13 15:35:45.595486 kubelet[2782]: I0213 15:35:45.595362 2782 apiserver.go:52] "Watching apiserver" Feb 13 15:35:45.665136 kubelet[2782]: I0213 15:35:45.665060 2782 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:35:46.149978 systemd[1]: Reloading requested from client PID 3239 ('systemctl') (unit session-5.scope)... Feb 13 15:35:46.149996 systemd[1]: Reloading... Feb 13 15:35:46.303719 zram_generator::config[3277]: No configuration found. Feb 13 15:35:46.472982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:46.590365 systemd[1]: Reloading finished in 439 ms. Feb 13 15:35:46.641532 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:46.642550 kubelet[2782]: I0213 15:35:46.642517 2782 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:35:46.652301 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:35:46.652698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:46.652762 systemd[1]: kubelet.service: Consumed 1.102s CPU time, 111.0M memory peak, 0B memory swap peak. Feb 13 15:35:46.661188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:46.975048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:35:46.988584 (kubelet)[3336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:35:47.114227 kubelet[3336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:47.114227 kubelet[3336]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:35:47.114227 kubelet[3336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:47.114988 kubelet[3336]: I0213 15:35:47.114364 3336 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:35:47.121718 kubelet[3336]: I0213 15:35:47.120218 3336 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:35:47.121718 kubelet[3336]: I0213 15:35:47.120243 3336 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:35:47.121718 kubelet[3336]: I0213 15:35:47.120557 3336 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:35:47.122195 kubelet[3336]: I0213 15:35:47.122167 3336 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:35:47.126473 kubelet[3336]: I0213 15:35:47.123792 3336 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:35:47.142087 kubelet[3336]: I0213 15:35:47.141536 3336 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:35:47.148771 kubelet[3336]: I0213 15:35:47.148711 3336 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:35:47.149155 kubelet[3336]: I0213 15:35:47.148775 3336 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:35:47.149155 kubelet[3336]: I0213 15:35:47.149139 3336 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:35:47.149610 kubelet[3336]: I0213 15:35:47.149517 3336 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:35:47.150834 kubelet[3336]: I0213 15:35:47.149728 3336 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:47.150834 kubelet[3336]: I0213 15:35:47.149869 3336 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:35:47.150834 kubelet[3336]: I0213 15:35:47.149886 3336 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:35:47.150834 kubelet[3336]: I0213 15:35:47.149914 3336 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:35:47.150834 kubelet[3336]: I0213 15:35:47.149932 3336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:35:47.157968 kubelet[3336]: I0213 15:35:47.157919 3336 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:35:47.162410 kubelet[3336]: I0213 15:35:47.162368 3336 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:35:47.174314 kubelet[3336]: I0213 15:35:47.174280 3336 server.go:1264] "Started kubelet" Feb 13 15:35:47.186233 kubelet[3336]: I0213 15:35:47.186070 3336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:35:47.202438 kubelet[3336]: I0213 15:35:47.202381 3336 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:35:47.205713 kubelet[3336]: I0213 15:35:47.205579 3336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:35:47.205995 kubelet[3336]: I0213 15:35:47.205964 3336 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:35:47.220036 kubelet[3336]: I0213 15:35:47.219850 3336 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:35:47.236265 kubelet[3336]: I0213 15:35:47.235512 3336 
desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:35:47.236265 kubelet[3336]: I0213 15:35:47.235716 3336 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:35:47.238490 kubelet[3336]: I0213 15:35:47.238457 3336 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:35:47.259662 kubelet[3336]: I0213 15:35:47.257424 3336 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:35:47.259662 kubelet[3336]: I0213 15:35:47.257560 3336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:35:47.275743 kubelet[3336]: I0213 15:35:47.271012 3336 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:35:47.321724 kubelet[3336]: I0213 15:35:47.321690 3336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:35:47.330767 kubelet[3336]: I0213 15:35:47.328876 3336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:35:47.330767 kubelet[3336]: I0213 15:35:47.328925 3336 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:35:47.330767 kubelet[3336]: I0213 15:35:47.328945 3336 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:35:47.330767 kubelet[3336]: E0213 15:35:47.328994 3336 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:35:47.348270 kubelet[3336]: I0213 15:35:47.348133 3336 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-74"
Feb 13 15:35:47.384677 kubelet[3336]: I0213 15:35:47.383425 3336 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-74"
Feb 13 15:35:47.384677 kubelet[3336]: I0213 15:35:47.383539 3336 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-74"
Feb 13 15:35:47.431231 kubelet[3336]: E0213 15:35:47.431196 3336 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.458610 3336 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.458712 3336 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.458737 3336 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.459016 3336 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.459191 3336 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:35:47.459750 kubelet[3336]: I0213 15:35:47.459230 3336 policy_none.go:49] "None policy: Start"
Feb 13 15:35:47.463304 kubelet[3336]: I0213 15:35:47.460676 3336 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:35:47.463304 kubelet[3336]: I0213 15:35:47.460706 3336 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:35:47.463304 kubelet[3336]: I0213 15:35:47.461041 3336 state_mem.go:75] "Updated machine memory state"
Feb 13 15:35:47.473077 kubelet[3336]: I0213 15:35:47.470603 3336 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:35:47.473077 kubelet[3336]: I0213 15:35:47.470831 3336 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:35:47.479649 kubelet[3336]: I0213 15:35:47.478041 3336 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:35:47.638752 kubelet[3336]: I0213 15:35:47.631795 3336 topology_manager.go:215] "Topology Admit Handler" podUID="a988b83ed381dcc124779befa478b212" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:47.638752 kubelet[3336]: I0213 15:35:47.632170 3336 topology_manager.go:215] "Topology Admit Handler" podUID="d4523beb2690f25ef2c046d112798780" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-74"
Feb 13 15:35:47.638752 kubelet[3336]: I0213 15:35:47.635194 3336 topology_manager.go:215] "Topology Admit Handler" podUID="d867fdb9ecbfe0ae1f46c434de4ec36b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:47.664853 kubelet[3336]: I0213 15:35:47.664818 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:47.666382 kubelet[3336]: I0213 15:35:47.666348 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:47.666981 kubelet[3336]: I0213 15:35:47.666602 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4523beb2690f25ef2c046d112798780-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-74\" (UID: \"d4523beb2690f25ef2c046d112798780\") " pod="kube-system/kube-scheduler-ip-172-31-27-74"
Feb 13 15:35:47.666981 kubelet[3336]: I0213 15:35:47.666743 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-ca-certs\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:47.666981 kubelet[3336]: I0213 15:35:47.666781 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:47.666981 kubelet[3336]: I0213 15:35:47.666825 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d867fdb9ecbfe0ae1f46c434de4ec36b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-74\" (UID: \"d867fdb9ecbfe0ae1f46c434de4ec36b\") " pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:47.666981 kubelet[3336]: I0213 15:35:47.666850 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:47.667380 kubelet[3336]: I0213 15:35:47.666875 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:47.667380 kubelet[3336]: I0213 15:35:47.666902 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a988b83ed381dcc124779befa478b212-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-74\" (UID: \"a988b83ed381dcc124779befa478b212\") " pod="kube-system/kube-controller-manager-ip-172-31-27-74"
Feb 13 15:35:48.176277 kubelet[3336]: I0213 15:35:48.176237 3336 apiserver.go:52] "Watching apiserver"
Feb 13 15:35:48.236076 kubelet[3336]: I0213 15:35:48.235956 3336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:35:48.459419 kubelet[3336]: E0213 15:35:48.448355 3336 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-74\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-74"
Feb 13 15:35:48.566014 kubelet[3336]: I0213 15:35:48.564355 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-74" podStartSLOduration=1.5643379240000002 podStartE2EDuration="1.564337924s" podCreationTimestamp="2025-02-13 15:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:48.529202136 +0000 UTC m=+1.527613018" watchObservedRunningTime="2025-02-13 15:35:48.564337924 +0000 UTC m=+1.562748807"
Feb 13 15:35:48.578678 kubelet[3336]: I0213 15:35:48.578487 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-74" podStartSLOduration=1.578465818 podStartE2EDuration="1.578465818s" podCreationTimestamp="2025-02-13 15:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:48.564774535 +0000 UTC m=+1.563185437" watchObservedRunningTime="2025-02-13 15:35:48.578465818 +0000 UTC m=+1.576876700"
Feb 13 15:35:48.603678 kubelet[3336]: I0213 15:35:48.602549 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-74" podStartSLOduration=1.602526664 podStartE2EDuration="1.602526664s" podCreationTimestamp="2025-02-13 15:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:48.580995193 +0000 UTC m=+1.579406077" watchObservedRunningTime="2025-02-13 15:35:48.602526664 +0000 UTC m=+1.600937542"
Feb 13 15:35:49.067737 sudo[2174]: pam_unix(sudo:session): session closed for user root
Feb 13 15:35:49.090216 sshd[2173]: Connection closed by 139.178.89.65 port 35978
Feb 13 15:35:49.092526 sshd-session[2171]: pam_unix(sshd:session): session closed for user core
Feb 13 15:35:49.099789 systemd[1]: sshd@4-172.31.27.74:22-139.178.89.65:35978.service: Deactivated successfully.
Feb 13 15:35:49.104029 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:35:49.105389 systemd[1]: session-5.scope: Consumed 4.006s CPU time, 185.5M memory peak, 0B memory swap peak.
Feb 13 15:35:49.109037 systemd-logind[1873]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:35:49.111843 systemd-logind[1873]: Removed session 5.
Feb 13 15:36:00.763653 kubelet[3336]: I0213 15:36:00.763594 3336 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:36:00.764362 containerd[1904]: time="2025-02-13T15:36:00.764269650Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:36:00.764873 kubelet[3336]: I0213 15:36:00.764671 3336 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:36:01.418394 kubelet[3336]: I0213 15:36:01.417663 3336 topology_manager.go:215] "Topology Admit Handler" podUID="7278c250-ab4c-4b51-8368-f5a7e9fc6073" podNamespace="kube-system" podName="kube-proxy-4hcnb"
Feb 13 15:36:01.434595 systemd[1]: Created slice kubepods-besteffort-pod7278c250_ab4c_4b51_8368_f5a7e9fc6073.slice - libcontainer container kubepods-besteffort-pod7278c250_ab4c_4b51_8368_f5a7e9fc6073.slice.
Feb 13 15:36:01.444470 kubelet[3336]: I0213 15:36:01.444420 3336 topology_manager.go:215] "Topology Admit Handler" podUID="c0299073-284d-488c-8250-e783c496a73b" podNamespace="kube-flannel" podName="kube-flannel-ds-25ndp"
Feb 13 15:36:01.461390 systemd[1]: Created slice kubepods-burstable-podc0299073_284d_488c_8250_e783c496a73b.slice - libcontainer container kubepods-burstable-podc0299073_284d_488c_8250_e783c496a73b.slice.
Feb 13 15:36:01.468826 kubelet[3336]: I0213 15:36:01.468710 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7278c250-ab4c-4b51-8368-f5a7e9fc6073-xtables-lock\") pod \"kube-proxy-4hcnb\" (UID: \"7278c250-ab4c-4b51-8368-f5a7e9fc6073\") " pod="kube-system/kube-proxy-4hcnb"
Feb 13 15:36:01.469331 kubelet[3336]: I0213 15:36:01.469145 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7278c250-ab4c-4b51-8368-f5a7e9fc6073-lib-modules\") pod \"kube-proxy-4hcnb\" (UID: \"7278c250-ab4c-4b51-8368-f5a7e9fc6073\") " pod="kube-system/kube-proxy-4hcnb"
Feb 13 15:36:01.469331 kubelet[3336]: I0213 15:36:01.469182 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7278c250-ab4c-4b51-8368-f5a7e9fc6073-kube-proxy\") pod \"kube-proxy-4hcnb\" (UID: \"7278c250-ab4c-4b51-8368-f5a7e9fc6073\") " pod="kube-system/kube-proxy-4hcnb"
Feb 13 15:36:01.469331 kubelet[3336]: I0213 15:36:01.469247 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbs88\" (UniqueName: \"kubernetes.io/projected/7278c250-ab4c-4b51-8368-f5a7e9fc6073-kube-api-access-dbs88\") pod \"kube-proxy-4hcnb\" (UID: \"7278c250-ab4c-4b51-8368-f5a7e9fc6073\") " pod="kube-system/kube-proxy-4hcnb"
Feb 13 15:36:01.572362 kubelet[3336]: I0213 15:36:01.571028 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67rj\" (UniqueName: \"kubernetes.io/projected/c0299073-284d-488c-8250-e783c496a73b-kube-api-access-w67rj\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.572610 kubelet[3336]: I0213 15:36:01.572404 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c0299073-284d-488c-8250-e783c496a73b-cni-plugin\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.572755 kubelet[3336]: I0213 15:36:01.572720 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c0299073-284d-488c-8250-e783c496a73b-flannel-cfg\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.572830 kubelet[3336]: I0213 15:36:01.572760 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c0299073-284d-488c-8250-e783c496a73b-run\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.572830 kubelet[3336]: I0213 15:36:01.572784 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c0299073-284d-488c-8250-e783c496a73b-cni\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.572830 kubelet[3336]: I0213 15:36:01.572809 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0299073-284d-488c-8250-e783c496a73b-xtables-lock\") pod \"kube-flannel-ds-25ndp\" (UID: \"c0299073-284d-488c-8250-e783c496a73b\") " pod="kube-flannel/kube-flannel-ds-25ndp"
Feb 13 15:36:01.611888 kubelet[3336]: E0213 15:36:01.611771 3336 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:36:01.612279 kubelet[3336]: E0213 15:36:01.611915 3336 projected.go:200] Error preparing data for projected volume kube-api-access-dbs88 for pod kube-system/kube-proxy-4hcnb: configmap "kube-root-ca.crt" not found
Feb 13 15:36:01.612279 kubelet[3336]: E0213 15:36:01.612267 3336 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7278c250-ab4c-4b51-8368-f5a7e9fc6073-kube-api-access-dbs88 podName:7278c250-ab4c-4b51-8368-f5a7e9fc6073 nodeName:}" failed. No retries permitted until 2025-02-13 15:36:02.112233859 +0000 UTC m=+15.110644740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dbs88" (UniqueName: "kubernetes.io/projected/7278c250-ab4c-4b51-8368-f5a7e9fc6073-kube-api-access-dbs88") pod "kube-proxy-4hcnb" (UID: "7278c250-ab4c-4b51-8368-f5a7e9fc6073") : configmap "kube-root-ca.crt" not found
Feb 13 15:36:01.770567 containerd[1904]: time="2025-02-13T15:36:01.768188148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-25ndp,Uid:c0299073-284d-488c-8250-e783c496a73b,Namespace:kube-flannel,Attempt:0,}"
Feb 13 15:36:02.064020 containerd[1904]: time="2025-02-13T15:36:02.060616479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:02.064020 containerd[1904]: time="2025-02-13T15:36:02.060733142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:02.064020 containerd[1904]: time="2025-02-13T15:36:02.062241079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:02.064020 containerd[1904]: time="2025-02-13T15:36:02.062417322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:02.287139 systemd[1]: Started cri-containerd-5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1.scope - libcontainer container 5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1.
Feb 13 15:36:02.347442 containerd[1904]: time="2025-02-13T15:36:02.344087785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hcnb,Uid:7278c250-ab4c-4b51-8368-f5a7e9fc6073,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:02.473705 containerd[1904]: time="2025-02-13T15:36:02.473552269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:02.478118 containerd[1904]: time="2025-02-13T15:36:02.477888049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:02.478118 containerd[1904]: time="2025-02-13T15:36:02.477921967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:02.483214 containerd[1904]: time="2025-02-13T15:36:02.478065350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:02.515934 containerd[1904]: time="2025-02-13T15:36:02.514133819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-25ndp,Uid:c0299073-284d-488c-8250-e783c496a73b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\""
Feb 13 15:36:02.537213 containerd[1904]: time="2025-02-13T15:36:02.536987040Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 15:36:02.557463 systemd[1]: Started cri-containerd-fdfc839f472d6843c59f8cf6293a900ba5580e9bcd516e442d05a21cefc625f6.scope - libcontainer container fdfc839f472d6843c59f8cf6293a900ba5580e9bcd516e442d05a21cefc625f6.
Feb 13 15:36:02.620941 containerd[1904]: time="2025-02-13T15:36:02.620335173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hcnb,Uid:7278c250-ab4c-4b51-8368-f5a7e9fc6073,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdfc839f472d6843c59f8cf6293a900ba5580e9bcd516e442d05a21cefc625f6\""
Feb 13 15:36:02.626123 containerd[1904]: time="2025-02-13T15:36:02.625978177Z" level=info msg="CreateContainer within sandbox \"fdfc839f472d6843c59f8cf6293a900ba5580e9bcd516e442d05a21cefc625f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:36:02.657336 containerd[1904]: time="2025-02-13T15:36:02.657187354Z" level=info msg="CreateContainer within sandbox \"fdfc839f472d6843c59f8cf6293a900ba5580e9bcd516e442d05a21cefc625f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0602282447c31b2aec8ec4b38ce2a0da50ac498bc77d91c124f4df4184bb4ece\""
Feb 13 15:36:02.660303 containerd[1904]: time="2025-02-13T15:36:02.660245732Z" level=info msg="StartContainer for \"0602282447c31b2aec8ec4b38ce2a0da50ac498bc77d91c124f4df4184bb4ece\""
Feb 13 15:36:02.750974 systemd[1]: Started cri-containerd-0602282447c31b2aec8ec4b38ce2a0da50ac498bc77d91c124f4df4184bb4ece.scope - libcontainer container 0602282447c31b2aec8ec4b38ce2a0da50ac498bc77d91c124f4df4184bb4ece.
Feb 13 15:36:02.839839 containerd[1904]: time="2025-02-13T15:36:02.839779728Z" level=info msg="StartContainer for \"0602282447c31b2aec8ec4b38ce2a0da50ac498bc77d91c124f4df4184bb4ece\" returns successfully"
Feb 13 15:36:04.798655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072476967.mount: Deactivated successfully.
Feb 13 15:36:04.891784 containerd[1904]: time="2025-02-13T15:36:04.891687724Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:04.893617 containerd[1904]: time="2025-02-13T15:36:04.893441714Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Feb 13 15:36:04.897225 containerd[1904]: time="2025-02-13T15:36:04.896434011Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:04.901760 containerd[1904]: time="2025-02-13T15:36:04.901714638Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:04.903266 containerd[1904]: time="2025-02-13T15:36:04.903227810Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.365819607s"
Feb 13 15:36:04.903390 containerd[1904]: time="2025-02-13T15:36:04.903360451Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Feb 13 15:36:04.906345 containerd[1904]: time="2025-02-13T15:36:04.906178096Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 15:36:04.933233 containerd[1904]: time="2025-02-13T15:36:04.933136371Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710\""
Feb 13 15:36:04.934754 containerd[1904]: time="2025-02-13T15:36:04.934340786Z" level=info msg="StartContainer for \"26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710\""
Feb 13 15:36:04.976869 systemd[1]: Started cri-containerd-26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710.scope - libcontainer container 26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710.
Feb 13 15:36:05.015764 containerd[1904]: time="2025-02-13T15:36:05.015438557Z" level=info msg="StartContainer for \"26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710\" returns successfully"
Feb 13 15:36:05.015807 systemd[1]: cri-containerd-26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710.scope: Deactivated successfully.
Feb 13 15:36:05.114323 containerd[1904]: time="2025-02-13T15:36:05.113861728Z" level=info msg="shim disconnected" id=26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710 namespace=k8s.io
Feb 13 15:36:05.114323 containerd[1904]: time="2025-02-13T15:36:05.113919466Z" level=warning msg="cleaning up after shim disconnected" id=26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710 namespace=k8s.io
Feb 13 15:36:05.114323 containerd[1904]: time="2025-02-13T15:36:05.113931462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:05.482009 containerd[1904]: time="2025-02-13T15:36:05.481961761Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 15:36:05.496613 kubelet[3336]: I0213 15:36:05.496549 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4hcnb" podStartSLOduration=4.496529546 podStartE2EDuration="4.496529546s" podCreationTimestamp="2025-02-13 15:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:03.493069483 +0000 UTC m=+16.491480364" watchObservedRunningTime="2025-02-13 15:36:05.496529546 +0000 UTC m=+18.494940425"
Feb 13 15:36:05.627413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e29bf7bc4f5a0b6cb05e003605d3ce133619ce8ceb5a89e2d89f169db3b710-rootfs.mount: Deactivated successfully.
Feb 13 15:36:07.830291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793475736.mount: Deactivated successfully.
Feb 13 15:36:08.941252 containerd[1904]: time="2025-02-13T15:36:08.941199451Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:08.943255 containerd[1904]: time="2025-02-13T15:36:08.942970718Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Feb 13 15:36:08.946713 containerd[1904]: time="2025-02-13T15:36:08.945451503Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:08.949383 containerd[1904]: time="2025-02-13T15:36:08.949344378Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:08.951932 containerd[1904]: time="2025-02-13T15:36:08.951895796Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.469695698s"
Feb 13 15:36:08.951932 containerd[1904]: time="2025-02-13T15:36:08.951931953Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Feb 13 15:36:08.954222 containerd[1904]: time="2025-02-13T15:36:08.954184000Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:36:08.980369 containerd[1904]: time="2025-02-13T15:36:08.980327951Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17\""
Feb 13 15:36:08.981450 containerd[1904]: time="2025-02-13T15:36:08.981001229Z" level=info msg="StartContainer for \"20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17\""
Feb 13 15:36:09.026318 systemd[1]: run-containerd-runc-k8s.io-20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17-runc.02fpAR.mount: Deactivated successfully.
Feb 13 15:36:09.033869 systemd[1]: Started cri-containerd-20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17.scope - libcontainer container 20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17.
Feb 13 15:36:09.069259 systemd[1]: cri-containerd-20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17.scope: Deactivated successfully.
Feb 13 15:36:09.074482 containerd[1904]: time="2025-02-13T15:36:09.074358976Z" level=info msg="StartContainer for \"20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17\" returns successfully"
Feb 13 15:36:09.108552 kubelet[3336]: I0213 15:36:09.108494 3336 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:36:09.110015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17-rootfs.mount: Deactivated successfully.
Feb 13 15:36:09.187730 kubelet[3336]: I0213 15:36:09.185301 3336 topology_manager.go:215] "Topology Admit Handler" podUID="98c0de2b-cd50-4ea9-b94a-d29911d16641" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rdxtx"
Feb 13 15:36:09.187730 kubelet[3336]: I0213 15:36:09.185514 3336 topology_manager.go:215] "Topology Admit Handler" podUID="dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xx54d"
Feb 13 15:36:09.222696 systemd[1]: Created slice kubepods-burstable-poddd9cb7a0_33c2_4adf_aaaa_3425ba1fd6ce.slice - libcontainer container kubepods-burstable-poddd9cb7a0_33c2_4adf_aaaa_3425ba1fd6ce.slice.
Feb 13 15:36:09.252162 systemd[1]: Created slice kubepods-burstable-pod98c0de2b_cd50_4ea9_b94a_d29911d16641.slice - libcontainer container kubepods-burstable-pod98c0de2b_cd50_4ea9_b94a_d29911d16641.slice.
Feb 13 15:36:09.264986 containerd[1904]: time="2025-02-13T15:36:09.264911603Z" level=info msg="shim disconnected" id=20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17 namespace=k8s.io
Feb 13 15:36:09.264986 containerd[1904]: time="2025-02-13T15:36:09.264978216Z" level=warning msg="cleaning up after shim disconnected" id=20f822f002ec48678178d1ad96c2e5409e62dc07307b547c1e41b254eee1ca17 namespace=k8s.io
Feb 13 15:36:09.264986 containerd[1904]: time="2025-02-13T15:36:09.264991132Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:09.270676 kubelet[3336]: I0213 15:36:09.269909 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87b7\" (UniqueName: \"kubernetes.io/projected/dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce-kube-api-access-p87b7\") pod \"coredns-7db6d8ff4d-xx54d\" (UID: \"dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce\") " pod="kube-system/coredns-7db6d8ff4d-xx54d"
Feb 13 15:36:09.270676 kubelet[3336]: I0213 15:36:09.270041 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98c0de2b-cd50-4ea9-b94a-d29911d16641-config-volume\") pod \"coredns-7db6d8ff4d-rdxtx\" (UID: \"98c0de2b-cd50-4ea9-b94a-d29911d16641\") " pod="kube-system/coredns-7db6d8ff4d-rdxtx"
Feb 13 15:36:09.270676 kubelet[3336]: I0213 15:36:09.270071 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7g7\" (UniqueName: \"kubernetes.io/projected/98c0de2b-cd50-4ea9-b94a-d29911d16641-kube-api-access-gn7g7\") pod \"coredns-7db6d8ff4d-rdxtx\" (UID: \"98c0de2b-cd50-4ea9-b94a-d29911d16641\") " pod="kube-system/coredns-7db6d8ff4d-rdxtx"
Feb 13 15:36:09.270676 kubelet[3336]: I0213 15:36:09.270141 3336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce-config-volume\") pod \"coredns-7db6d8ff4d-xx54d\" (UID: \"dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce\") " pod="kube-system/coredns-7db6d8ff4d-xx54d"
Feb 13 15:36:09.529137 containerd[1904]: time="2025-02-13T15:36:09.529018628Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:36:09.551901 containerd[1904]: time="2025-02-13T15:36:09.551646403Z" level=info msg="CreateContainer within sandbox \"5c9859483ff21cd985ec2325db559ed0cc8314f34ec42f8a5840641d2c5782b1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"0926b00ffd4dcd914703da8d784248727497105ffa5106d3a47c238646d0261e\""
Feb 13 15:36:09.553907 containerd[1904]: time="2025-02-13T15:36:09.553869830Z" level=info msg="StartContainer for \"0926b00ffd4dcd914703da8d784248727497105ffa5106d3a47c238646d0261e\""
Feb 13 15:36:09.555939 containerd[1904]: time="2025-02-13T15:36:09.553876041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xx54d,Uid:dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:09.567934 containerd[1904]: time="2025-02-13T15:36:09.567893489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rdxtx,Uid:98c0de2b-cd50-4ea9-b94a-d29911d16641,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:09.593895 systemd[1]: Started cri-containerd-0926b00ffd4dcd914703da8d784248727497105ffa5106d3a47c238646d0261e.scope - libcontainer container 0926b00ffd4dcd914703da8d784248727497105ffa5106d3a47c238646d0261e.
Feb 13 15:36:09.683883 containerd[1904]: time="2025-02-13T15:36:09.683828899Z" level=info msg="StartContainer for \"0926b00ffd4dcd914703da8d784248727497105ffa5106d3a47c238646d0261e\" returns successfully"
Feb 13 15:36:09.690154 containerd[1904]: time="2025-02-13T15:36:09.690096723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xx54d,Uid:dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2316aec6ea4898d7ceba30deb4d9394c7204c8151de7f3d1a294147c32a365d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:09.690526 kubelet[3336]: E0213 15:36:09.690482 3336 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2316aec6ea4898d7ceba30deb4d9394c7204c8151de7f3d1a294147c32a365d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:09.691013 kubelet[3336]: E0213 15:36:09.690558 3336 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2316aec6ea4898d7ceba30deb4d9394c7204c8151de7f3d1a294147c32a365d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xx54d"
Feb 13 15:36:09.691630 kubelet[3336]: E0213 15:36:09.691591 3336 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2316aec6ea4898d7ceba30deb4d9394c7204c8151de7f3d1a294147c32a365d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xx54d"
Feb 13 15:36:09.691790 kubelet[3336]: E0213 15:36:09.691756 3336 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xx54d_kube-system(dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xx54d_kube-system(dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2316aec6ea4898d7ceba30deb4d9394c7204c8151de7f3d1a294147c32a365d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-xx54d" podUID="dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce"
Feb 13 15:36:09.693287 containerd[1904]: time="2025-02-13T15:36:09.693179070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rdxtx,Uid:98c0de2b-cd50-4ea9-b94a-d29911d16641,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f2715fe8c632d58655eaa6bf27ee88c040a4faf38cb1d4773c5c61cf2a6b41f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:09.693714 kubelet[3336]: E0213 15:36:09.693533 3336 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2715fe8c632d58655eaa6bf27ee88c040a4faf38cb1d4773c5c61cf2a6b41f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:09.693714 kubelet[3336]: E0213 15:36:09.693584 3336 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2715fe8c632d58655eaa6bf27ee88c040a4faf38cb1d4773c5c61cf2a6b41f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rdxtx"
Feb 13 15:36:09.694291 kubelet[3336]: E0213 15:36:09.694216 3336 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2715fe8c632d58655eaa6bf27ee88c040a4faf38cb1d4773c5c61cf2a6b41f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rdxtx"
Feb 13 15:36:09.694373 kubelet[3336]: E0213 15:36:09.694342 3336 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rdxtx_kube-system(98c0de2b-cd50-4ea9-b94a-d29911d16641)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rdxtx_kube-system(98c0de2b-cd50-4ea9-b94a-d29911d16641)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f2715fe8c632d58655eaa6bf27ee88c040a4faf38cb1d4773c5c61cf2a6b41f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-rdxtx" podUID="98c0de2b-cd50-4ea9-b94a-d29911d16641"
Feb 13 15:36:10.546072 kubelet[3336]: I0213 15:36:10.545749 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="kube-flannel/kube-flannel-ds-25ndp" podStartSLOduration=3.11972708 podStartE2EDuration="9.545727983s" podCreationTimestamp="2025-02-13 15:36:01 +0000 UTC" firstStartedPulling="2025-02-13 15:36:02.526696348 +0000 UTC m=+15.525107220" lastFinishedPulling="2025-02-13 15:36:08.952697253 +0000 UTC m=+21.951108123" observedRunningTime="2025-02-13 15:36:10.545340132 +0000 UTC m=+23.543751015" watchObservedRunningTime="2025-02-13 15:36:10.545727983 +0000 UTC m=+23.544138864" Feb 13 15:36:10.997167 (udev-worker)[3873]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:36:11.016116 systemd-networkd[1735]: flannel.1: Link UP Feb 13 15:36:11.021491 systemd-networkd[1735]: flannel.1: Gained carrier Feb 13 15:36:12.476151 systemd-networkd[1735]: flannel.1: Gained IPv6LL Feb 13 15:36:14.993024 ntpd[1867]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:36:14.993124 ntpd[1867]: Listen normally on 8 flannel.1 [fe80::74f0:f1ff:feb8:f1ba%4]:123 Feb 13 15:36:14.993852 ntpd[1867]: 13 Feb 15:36:14 ntpd[1867]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:36:14.993852 ntpd[1867]: 13 Feb 15:36:14 ntpd[1867]: Listen normally on 8 flannel.1 [fe80::74f0:f1ff:feb8:f1ba%4]:123 Feb 13 15:36:20.331559 containerd[1904]: time="2025-02-13T15:36:20.331510951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xx54d,Uid:dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:20.426284 systemd-networkd[1735]: cni0: Link UP Feb 13 15:36:20.426295 systemd-networkd[1735]: cni0: Gained carrier Feb 13 15:36:20.431535 (udev-worker)[3988]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:36:20.432196 systemd-networkd[1735]: cni0: Lost carrier Feb 13 15:36:20.523860 kernel: cni0: port 1(veth539d7e35) entered blocking state Feb 13 15:36:20.524396 kernel: cni0: port 1(veth539d7e35) entered disabled state Feb 13 15:36:20.521596 systemd-networkd[1735]: veth539d7e35: Link UP Feb 13 15:36:20.526476 kernel: veth539d7e35: entered allmulticast mode Feb 13 15:36:20.526560 kernel: veth539d7e35: entered promiscuous mode Feb 13 15:36:20.528791 kernel: cni0: port 1(veth539d7e35) entered blocking state Feb 13 15:36:20.528967 kernel: cni0: port 1(veth539d7e35) entered forwarding state Feb 13 15:36:20.529007 kernel: cni0: port 1(veth539d7e35) entered disabled state Feb 13 15:36:20.544619 kernel: cni0: port 1(veth539d7e35) entered blocking state Feb 13 15:36:20.544967 kernel: cni0: port 1(veth539d7e35) entered forwarding state Feb 13 15:36:20.545909 systemd-networkd[1735]: veth539d7e35: Gained carrier Feb 13 15:36:20.546242 systemd-networkd[1735]: cni0: Gained carrier Feb 13 15:36:20.573726 containerd[1904]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Feb 13 15:36:20.573726 containerd[1904]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:36:20.612073 containerd[1904]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:36:20.611211551Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:20.612073 containerd[1904]: time="2025-02-13T15:36:20.611399379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:20.612073 containerd[1904]: time="2025-02-13T15:36:20.611430105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:20.612073 containerd[1904]: time="2025-02-13T15:36:20.611543619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:20.670869 systemd[1]: Started cri-containerd-96d19645560d8c2fff37381550185f568dbb54de2ae20b5159da4aafbc3272b4.scope - libcontainer container 96d19645560d8c2fff37381550185f568dbb54de2ae20b5159da4aafbc3272b4. Feb 13 15:36:20.729081 containerd[1904]: time="2025-02-13T15:36:20.728952567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xx54d,Uid:dd9cb7a0-33c2-4adf-aaaa-3425ba1fd6ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d19645560d8c2fff37381550185f568dbb54de2ae20b5159da4aafbc3272b4\"" Feb 13 15:36:20.768836 containerd[1904]: time="2025-02-13T15:36:20.768785875Z" level=info msg="CreateContainer within sandbox \"96d19645560d8c2fff37381550185f568dbb54de2ae20b5159da4aafbc3272b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:36:20.790039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417457174.mount: Deactivated successfully. 
Feb 13 15:36:20.795246 containerd[1904]: time="2025-02-13T15:36:20.795206215Z" level=info msg="CreateContainer within sandbox \"96d19645560d8c2fff37381550185f568dbb54de2ae20b5159da4aafbc3272b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11891b3534f1f0107a24dddc2e2cbb90592349873e8caecf2ab0b2d93ac4dfe7\"" Feb 13 15:36:20.802743 containerd[1904]: time="2025-02-13T15:36:20.801804704Z" level=info msg="StartContainer for \"11891b3534f1f0107a24dddc2e2cbb90592349873e8caecf2ab0b2d93ac4dfe7\"" Feb 13 15:36:20.845958 systemd[1]: Started cri-containerd-11891b3534f1f0107a24dddc2e2cbb90592349873e8caecf2ab0b2d93ac4dfe7.scope - libcontainer container 11891b3534f1f0107a24dddc2e2cbb90592349873e8caecf2ab0b2d93ac4dfe7. Feb 13 15:36:20.893345 containerd[1904]: time="2025-02-13T15:36:20.893174907Z" level=info msg="StartContainer for \"11891b3534f1f0107a24dddc2e2cbb90592349873e8caecf2ab0b2d93ac4dfe7\" returns successfully" Feb 13 15:36:21.363822 containerd[1904]: time="2025-02-13T15:36:21.363784111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rdxtx,Uid:98c0de2b-cd50-4ea9-b94a-d29911d16641,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:21.420786 (udev-worker)[3990]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:36:21.424974 kernel: cni0: port 2(veth7ccef0b1) entered blocking state Feb 13 15:36:21.425089 kernel: cni0: port 2(veth7ccef0b1) entered disabled state Feb 13 15:36:21.425122 kernel: veth7ccef0b1: entered allmulticast mode Feb 13 15:36:21.425149 kernel: veth7ccef0b1: entered promiscuous mode Feb 13 15:36:21.423175 systemd-networkd[1735]: veth7ccef0b1: Link UP Feb 13 15:36:21.428279 kernel: cni0: port 2(veth7ccef0b1) entered blocking state Feb 13 15:36:21.428365 kernel: cni0: port 2(veth7ccef0b1) entered forwarding state Feb 13 15:36:21.432685 kernel: cni0: port 2(veth7ccef0b1) entered disabled state Feb 13 15:36:21.439180 kernel: cni0: port 2(veth7ccef0b1) entered blocking state Feb 13 15:36:21.439280 kernel: cni0: port 2(veth7ccef0b1) entered forwarding state Feb 13 15:36:21.441318 systemd-networkd[1735]: veth7ccef0b1: Gained carrier Feb 13 15:36:21.444609 containerd[1904]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 13 15:36:21.444609 containerd[1904]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:36:21.484002 containerd[1904]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:36:21.483710264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:21.484002 containerd[1904]: time="2025-02-13T15:36:21.483963557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:21.487167 containerd[1904]: time="2025-02-13T15:36:21.483993202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:21.487167 containerd[1904]: time="2025-02-13T15:36:21.484110458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:21.536244 systemd[1]: Started cri-containerd-fce9f512ec08a0abb599f00027a213752638dc618ac7560ac834beded7cbb2fe.scope - libcontainer container fce9f512ec08a0abb599f00027a213752638dc618ac7560ac834beded7cbb2fe. Feb 13 15:36:21.634978 containerd[1904]: time="2025-02-13T15:36:21.634436872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rdxtx,Uid:98c0de2b-cd50-4ea9-b94a-d29911d16641,Namespace:kube-system,Attempt:0,} returns sandbox id \"fce9f512ec08a0abb599f00027a213752638dc618ac7560ac834beded7cbb2fe\"" Feb 13 15:36:21.639989 containerd[1904]: time="2025-02-13T15:36:21.639750941Z" level=info msg="CreateContainer within sandbox \"fce9f512ec08a0abb599f00027a213752638dc618ac7560ac834beded7cbb2fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:36:21.659312 kubelet[3336]: I0213 15:36:21.653907 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xx54d" podStartSLOduration=20.653883358 podStartE2EDuration="20.653883358s" podCreationTimestamp="2025-02-13 15:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:21.621442933 +0000 UTC m=+34.619853816" watchObservedRunningTime="2025-02-13 
15:36:21.653883358 +0000 UTC m=+34.652294240" Feb 13 15:36:21.669907 containerd[1904]: time="2025-02-13T15:36:21.669852140Z" level=info msg="CreateContainer within sandbox \"fce9f512ec08a0abb599f00027a213752638dc618ac7560ac834beded7cbb2fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86fdabf1795de16e0880ab2bd35fc1d47db0f81e227b2925d3a4743309c60339\"" Feb 13 15:36:21.674628 containerd[1904]: time="2025-02-13T15:36:21.673377579Z" level=info msg="StartContainer for \"86fdabf1795de16e0880ab2bd35fc1d47db0f81e227b2925d3a4743309c60339\"" Feb 13 15:36:21.729087 systemd[1]: Started cri-containerd-86fdabf1795de16e0880ab2bd35fc1d47db0f81e227b2925d3a4743309c60339.scope - libcontainer container 86fdabf1795de16e0880ab2bd35fc1d47db0f81e227b2925d3a4743309c60339. Feb 13 15:36:21.756699 systemd-networkd[1735]: veth539d7e35: Gained IPv6LL Feb 13 15:36:21.792629 containerd[1904]: time="2025-02-13T15:36:21.792441210Z" level=info msg="StartContainer for \"86fdabf1795de16e0880ab2bd35fc1d47db0f81e227b2925d3a4743309c60339\" returns successfully" Feb 13 15:36:21.884052 systemd-networkd[1735]: cni0: Gained IPv6LL Feb 13 15:36:22.595417 kubelet[3336]: I0213 15:36:22.595359 3336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rdxtx" podStartSLOduration=21.595337675 podStartE2EDuration="21.595337675s" podCreationTimestamp="2025-02-13 15:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:22.594901231 +0000 UTC m=+35.593312114" watchObservedRunningTime="2025-02-13 15:36:22.595337675 +0000 UTC m=+35.593748558" Feb 13 15:36:22.974627 systemd-networkd[1735]: veth7ccef0b1: Gained IPv6LL Feb 13 15:36:24.992742 ntpd[1867]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:36:24.992848 ntpd[1867]: Listen normally on 10 cni0 [fe80::80c2:beff:fecd:8497%5]:123 Feb 13 15:36:24.993265 ntpd[1867]: 13 Feb 15:36:24 
ntpd[1867]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:36:24.993265 ntpd[1867]: 13 Feb 15:36:24 ntpd[1867]: Listen normally on 10 cni0 [fe80::80c2:beff:fecd:8497%5]:123 Feb 13 15:36:24.993265 ntpd[1867]: 13 Feb 15:36:24 ntpd[1867]: Listen normally on 11 veth539d7e35 [fe80::1cce:aaff:fe14:13b0%6]:123 Feb 13 15:36:24.993265 ntpd[1867]: 13 Feb 15:36:24 ntpd[1867]: Listen normally on 12 veth7ccef0b1 [fe80::e0f0:ffff:febb:d601%7]:123 Feb 13 15:36:24.992905 ntpd[1867]: Listen normally on 11 veth539d7e35 [fe80::1cce:aaff:fe14:13b0%6]:123 Feb 13 15:36:24.992948 ntpd[1867]: Listen normally on 12 veth7ccef0b1 [fe80::e0f0:ffff:febb:d601%7]:123 Feb 13 15:36:26.951712 systemd[1]: Started sshd@5-172.31.27.74:22-139.178.89.65:54536.service - OpenSSH per-connection server daemon (139.178.89.65:54536). Feb 13 15:36:27.177449 sshd[4243]: Accepted publickey for core from 139.178.89.65 port 54536 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:27.179374 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:27.188698 systemd-logind[1873]: New session 6 of user core. Feb 13 15:36:27.201921 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:36:27.465293 sshd[4245]: Connection closed by 139.178.89.65 port 54536 Feb 13 15:36:27.465861 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:27.470694 systemd-logind[1873]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:36:27.472744 systemd[1]: sshd@5-172.31.27.74:22-139.178.89.65:54536.service: Deactivated successfully. Feb 13 15:36:27.475489 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:36:27.477432 systemd-logind[1873]: Removed session 6. Feb 13 15:36:32.512736 systemd[1]: Started sshd@6-172.31.27.74:22-139.178.89.65:54546.service - OpenSSH per-connection server daemon (139.178.89.65:54546). 
Feb 13 15:36:32.705113 sshd[4287]: Accepted publickey for core from 139.178.89.65 port 54546 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:32.708679 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:32.722179 systemd-logind[1873]: New session 7 of user core. Feb 13 15:36:32.730844 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:36:32.997617 sshd[4289]: Connection closed by 139.178.89.65 port 54546 Feb 13 15:36:32.999092 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:33.010017 systemd[1]: sshd@6-172.31.27.74:22-139.178.89.65:54546.service: Deactivated successfully. Feb 13 15:36:33.015493 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:36:33.018171 systemd-logind[1873]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:36:33.020006 systemd-logind[1873]: Removed session 7. Feb 13 15:36:38.042205 systemd[1]: Started sshd@7-172.31.27.74:22-139.178.89.65:55520.service - OpenSSH per-connection server daemon (139.178.89.65:55520). Feb 13 15:36:38.232061 sshd[4324]: Accepted publickey for core from 139.178.89.65 port 55520 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:38.232987 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:38.239708 systemd-logind[1873]: New session 8 of user core. Feb 13 15:36:38.244839 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:36:38.476387 sshd[4326]: Connection closed by 139.178.89.65 port 55520 Feb 13 15:36:38.478813 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:38.484450 systemd[1]: sshd@7-172.31.27.74:22-139.178.89.65:55520.service: Deactivated successfully. Feb 13 15:36:38.494409 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:36:38.496456 systemd-logind[1873]: Session 8 logged out. 
Waiting for processes to exit. Feb 13 15:36:38.498185 systemd-logind[1873]: Removed session 8. Feb 13 15:36:43.523316 systemd[1]: Started sshd@8-172.31.27.74:22-139.178.89.65:55532.service - OpenSSH per-connection server daemon (139.178.89.65:55532). Feb 13 15:36:43.713954 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 55532 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:43.715652 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:43.746726 systemd-logind[1873]: New session 9 of user core. Feb 13 15:36:43.756910 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:36:43.950454 sshd[4364]: Connection closed by 139.178.89.65 port 55532 Feb 13 15:36:43.951071 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:43.955787 systemd[1]: sshd@8-172.31.27.74:22-139.178.89.65:55532.service: Deactivated successfully. Feb 13 15:36:43.958183 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:36:43.959170 systemd-logind[1873]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:36:43.960477 systemd-logind[1873]: Removed session 9. Feb 13 15:36:43.989061 systemd[1]: Started sshd@9-172.31.27.74:22-139.178.89.65:55534.service - OpenSSH per-connection server daemon (139.178.89.65:55534). Feb 13 15:36:44.153811 sshd[4376]: Accepted publickey for core from 139.178.89.65 port 55534 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:44.155819 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:44.175897 systemd-logind[1873]: New session 10 of user core. Feb 13 15:36:44.178832 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 15:36:44.440321 sshd[4378]: Connection closed by 139.178.89.65 port 55534 Feb 13 15:36:44.441091 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:44.453235 systemd[1]: sshd@9-172.31.27.74:22-139.178.89.65:55534.service: Deactivated successfully. Feb 13 15:36:44.460843 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:36:44.464249 systemd-logind[1873]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:36:44.483185 systemd[1]: Started sshd@10-172.31.27.74:22-139.178.89.65:55542.service - OpenSSH per-connection server daemon (139.178.89.65:55542). Feb 13 15:36:44.484809 systemd-logind[1873]: Removed session 10. Feb 13 15:36:44.682156 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 55542 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:44.684209 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:44.713472 systemd-logind[1873]: New session 11 of user core. Feb 13 15:36:44.719019 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:36:44.934847 sshd[4389]: Connection closed by 139.178.89.65 port 55542 Feb 13 15:36:44.936580 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:44.941149 systemd-logind[1873]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:36:44.941792 systemd[1]: sshd@10-172.31.27.74:22-139.178.89.65:55542.service: Deactivated successfully. Feb 13 15:36:44.945297 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:36:44.946668 systemd-logind[1873]: Removed session 11. Feb 13 15:36:49.980024 systemd[1]: Started sshd@11-172.31.27.74:22-139.178.89.65:36462.service - OpenSSH per-connection server daemon (139.178.89.65:36462). 
Feb 13 15:36:50.200299 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 36462 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:50.202208 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:50.208912 systemd-logind[1873]: New session 12 of user core. Feb 13 15:36:50.213932 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:36:50.420098 sshd[4425]: Connection closed by 139.178.89.65 port 36462 Feb 13 15:36:50.422071 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:50.428382 systemd[1]: sshd@11-172.31.27.74:22-139.178.89.65:36462.service: Deactivated successfully. Feb 13 15:36:50.432045 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:36:50.434901 systemd-logind[1873]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:36:50.437356 systemd-logind[1873]: Removed session 12. Feb 13 15:36:55.462076 systemd[1]: Started sshd@12-172.31.27.74:22-139.178.89.65:58226.service - OpenSSH per-connection server daemon (139.178.89.65:58226). Feb 13 15:36:55.657458 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 58226 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:55.658327 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:55.684247 systemd-logind[1873]: New session 13 of user core. Feb 13 15:36:55.687904 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:36:55.928731 sshd[4459]: Connection closed by 139.178.89.65 port 58226 Feb 13 15:36:55.930733 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:55.935775 systemd[1]: sshd@12-172.31.27.74:22-139.178.89.65:58226.service: Deactivated successfully. Feb 13 15:36:55.939458 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:36:55.940465 systemd-logind[1873]: Session 13 logged out. 
Waiting for processes to exit. Feb 13 15:36:55.941731 systemd-logind[1873]: Removed session 13. Feb 13 15:36:55.966077 systemd[1]: Started sshd@13-172.31.27.74:22-139.178.89.65:58240.service - OpenSSH per-connection server daemon (139.178.89.65:58240). Feb 13 15:36:56.152404 sshd[4470]: Accepted publickey for core from 139.178.89.65 port 58240 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:56.153195 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:56.160089 systemd-logind[1873]: New session 14 of user core. Feb 13 15:36:56.168115 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:36:56.919005 sshd[4472]: Connection closed by 139.178.89.65 port 58240 Feb 13 15:36:56.920145 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:56.924717 systemd-logind[1873]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:36:56.926332 systemd[1]: sshd@13-172.31.27.74:22-139.178.89.65:58240.service: Deactivated successfully. Feb 13 15:36:56.929191 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:36:56.930804 systemd-logind[1873]: Removed session 14. Feb 13 15:36:56.956203 systemd[1]: Started sshd@14-172.31.27.74:22-139.178.89.65:58250.service - OpenSSH per-connection server daemon (139.178.89.65:58250). Feb 13 15:36:57.132249 sshd[4502]: Accepted publickey for core from 139.178.89.65 port 58250 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:57.133966 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:57.140521 systemd-logind[1873]: New session 15 of user core. Feb 13 15:36:57.145859 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 15:36:59.121266 sshd[4504]: Connection closed by 139.178.89.65 port 58250 Feb 13 15:36:59.122140 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:59.132614 systemd[1]: sshd@14-172.31.27.74:22-139.178.89.65:58250.service: Deactivated successfully. Feb 13 15:36:59.133286 systemd-logind[1873]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:36:59.139849 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:36:59.141916 systemd-logind[1873]: Removed session 15. Feb 13 15:36:59.163317 systemd[1]: Started sshd@15-172.31.27.74:22-139.178.89.65:58252.service - OpenSSH per-connection server daemon (139.178.89.65:58252). Feb 13 15:36:59.333727 sshd[4519]: Accepted publickey for core from 139.178.89.65 port 58252 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:59.335538 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:59.343175 systemd-logind[1873]: New session 16 of user core. Feb 13 15:36:59.349862 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:36:59.686510 sshd[4521]: Connection closed by 139.178.89.65 port 58252 Feb 13 15:36:59.688141 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:59.691528 systemd[1]: sshd@15-172.31.27.74:22-139.178.89.65:58252.service: Deactivated successfully. Feb 13 15:36:59.694022 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:36:59.695931 systemd-logind[1873]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:36:59.697450 systemd-logind[1873]: Removed session 16. Feb 13 15:36:59.721018 systemd[1]: Started sshd@16-172.31.27.74:22-139.178.89.65:58266.service - OpenSSH per-connection server daemon (139.178.89.65:58266). 
Feb 13 15:36:59.877184 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 58266 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:36:59.878862 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:59.905448 systemd-logind[1873]: New session 17 of user core. Feb 13 15:36:59.914691 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:37:00.301035 sshd[4531]: Connection closed by 139.178.89.65 port 58266 Feb 13 15:37:00.302851 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:00.307543 systemd[1]: sshd@16-172.31.27.74:22-139.178.89.65:58266.service: Deactivated successfully. Feb 13 15:37:00.311418 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:37:00.313368 systemd-logind[1873]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:37:00.316864 systemd-logind[1873]: Removed session 17. Feb 13 15:37:05.341052 systemd[1]: Started sshd@17-172.31.27.74:22-139.178.89.65:36574.service - OpenSSH per-connection server daemon (139.178.89.65:36574). Feb 13 15:37:05.510739 sshd[4566]: Accepted publickey for core from 139.178.89.65 port 36574 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:05.512155 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:05.517135 systemd-logind[1873]: New session 18 of user core. Feb 13 15:37:05.523861 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:37:05.774897 sshd[4568]: Connection closed by 139.178.89.65 port 36574 Feb 13 15:37:05.775670 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:05.781074 systemd[1]: sshd@17-172.31.27.74:22-139.178.89.65:36574.service: Deactivated successfully. Feb 13 15:37:05.784141 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:37:05.785465 systemd-logind[1873]: Session 18 logged out. 
Waiting for processes to exit. Feb 13 15:37:05.787341 systemd-logind[1873]: Removed session 18. Feb 13 15:37:10.816454 systemd[1]: Started sshd@18-172.31.27.74:22-139.178.89.65:36580.service - OpenSSH per-connection server daemon (139.178.89.65:36580). Feb 13 15:37:10.998922 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 36580 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:11.007106 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:11.026454 systemd-logind[1873]: New session 19 of user core. Feb 13 15:37:11.034114 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:37:11.324964 sshd[4606]: Connection closed by 139.178.89.65 port 36580 Feb 13 15:37:11.326821 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:11.336552 systemd[1]: sshd@18-172.31.27.74:22-139.178.89.65:36580.service: Deactivated successfully. Feb 13 15:37:11.340017 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:37:11.341359 systemd-logind[1873]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:37:11.345074 systemd-logind[1873]: Removed session 19. Feb 13 15:37:16.375123 systemd[1]: Started sshd@19-172.31.27.74:22-139.178.89.65:50402.service - OpenSSH per-connection server daemon (139.178.89.65:50402). Feb 13 15:37:16.556098 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 50402 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:37:16.558753 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:16.569012 systemd-logind[1873]: New session 20 of user core. Feb 13 15:37:16.585209 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 15:37:16.838818 sshd[4645]: Connection closed by 139.178.89.65 port 50402
Feb 13 15:37:16.839926 sshd-session[4637]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:16.843374 systemd[1]: sshd@19-172.31.27.74:22-139.178.89.65:50402.service: Deactivated successfully.
Feb 13 15:37:16.847374 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:37:16.850259 systemd-logind[1873]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:37:16.852572 systemd-logind[1873]: Removed session 20.
Feb 13 15:37:21.875491 systemd[1]: Started sshd@20-172.31.27.74:22-139.178.89.65:50414.service - OpenSSH per-connection server daemon (139.178.89.65:50414).
Feb 13 15:37:22.049386 sshd[4693]: Accepted publickey for core from 139.178.89.65 port 50414 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM
Feb 13 15:37:22.051429 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:22.061463 systemd-logind[1873]: New session 21 of user core.
Feb 13 15:37:22.064876 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:37:22.311466 sshd[4695]: Connection closed by 139.178.89.65 port 50414
Feb 13 15:37:22.312348 sshd-session[4693]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:22.318198 systemd-logind[1873]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:37:22.319288 systemd[1]: sshd@20-172.31.27.74:22-139.178.89.65:50414.service: Deactivated successfully.
Feb 13 15:37:22.321435 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:37:22.323014 systemd-logind[1873]: Removed session 21.
Feb 13 15:37:35.746756 systemd[1]: cri-containerd-eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d.scope: Deactivated successfully.
Feb 13 15:37:35.748768 systemd[1]: cri-containerd-eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d.scope: Consumed 3.267s CPU time, 26.1M memory peak, 0B memory swap peak.
Feb 13 15:37:35.816544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d-rootfs.mount: Deactivated successfully.
Feb 13 15:37:35.829100 containerd[1904]: time="2025-02-13T15:37:35.828971373Z" level=info msg="shim disconnected" id=eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d namespace=k8s.io
Feb 13 15:37:35.830103 containerd[1904]: time="2025-02-13T15:37:35.829093762Z" level=warning msg="cleaning up after shim disconnected" id=eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d namespace=k8s.io
Feb 13 15:37:35.830103 containerd[1904]: time="2025-02-13T15:37:35.829123578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:36.871615 kubelet[3336]: I0213 15:37:36.870901 3336 scope.go:117] "RemoveContainer" containerID="eef2b49ba0b095ae5360c4c8602691a9a630c71e8661c7b543819f1de8caf68d"
Feb 13 15:37:36.888288 containerd[1904]: time="2025-02-13T15:37:36.874350333Z" level=info msg="CreateContainer within sandbox \"2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:37:36.919873 containerd[1904]: time="2025-02-13T15:37:36.919825802Z" level=info msg="CreateContainer within sandbox \"2c084ef8204638779a945d68ff4dadab86c09e48a2ed1b5a24eb7a15cf10a565\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"efd0dcb9dd9cd5a6dca2612d798174bbffbabd48a824a45ce9a678917f4ea73e\""
Feb 13 15:37:36.920368 containerd[1904]: time="2025-02-13T15:37:36.920327551Z" level=info msg="StartContainer for \"efd0dcb9dd9cd5a6dca2612d798174bbffbabd48a824a45ce9a678917f4ea73e\""
Feb 13 15:37:36.970125 systemd[1]: Started cri-containerd-efd0dcb9dd9cd5a6dca2612d798174bbffbabd48a824a45ce9a678917f4ea73e.scope - libcontainer container efd0dcb9dd9cd5a6dca2612d798174bbffbabd48a824a45ce9a678917f4ea73e.
Feb 13 15:37:37.070477 containerd[1904]: time="2025-02-13T15:37:37.067372021Z" level=info msg="StartContainer for \"efd0dcb9dd9cd5a6dca2612d798174bbffbabd48a824a45ce9a678917f4ea73e\" returns successfully"
Feb 13 15:37:39.817679 kubelet[3336]: E0213 15:37:39.817216 3336 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:37:41.229610 systemd[1]: cri-containerd-24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5.scope: Deactivated successfully.
Feb 13 15:37:41.230331 systemd[1]: cri-containerd-24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5.scope: Consumed 1.760s CPU time, 17.7M memory peak, 0B memory swap peak.
Feb 13 15:37:41.262400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5-rootfs.mount: Deactivated successfully.
Feb 13 15:37:41.283509 containerd[1904]: time="2025-02-13T15:37:41.283204651Z" level=info msg="shim disconnected" id=24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5 namespace=k8s.io
Feb 13 15:37:41.283509 containerd[1904]: time="2025-02-13T15:37:41.283507922Z" level=warning msg="cleaning up after shim disconnected" id=24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5 namespace=k8s.io
Feb 13 15:37:41.284313 containerd[1904]: time="2025-02-13T15:37:41.283521420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:41.892816 kubelet[3336]: I0213 15:37:41.892742 3336 scope.go:117] "RemoveContainer" containerID="24dd6f6a4e1005025788095d48e54c3c703174fcfafff9df6b67798dc1b57af5"
Feb 13 15:37:41.895593 containerd[1904]: time="2025-02-13T15:37:41.895593933Z" level=info msg="CreateContainer within sandbox \"cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:37:41.924514 containerd[1904]: time="2025-02-13T15:37:41.924464068Z" level=info msg="CreateContainer within sandbox \"cf798edb96a29d15cf49623016b4a62df6e40e79b5e22f9bb85f7c4ade794955\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ed22c1fcea304fd7bce3b2ae2a08240925e68ef2bbe94e228128bdc0dc1fa3a3\""
Feb 13 15:37:41.925034 containerd[1904]: time="2025-02-13T15:37:41.924999045Z" level=info msg="StartContainer for \"ed22c1fcea304fd7bce3b2ae2a08240925e68ef2bbe94e228128bdc0dc1fa3a3\""
Feb 13 15:37:41.976914 systemd[1]: Started cri-containerd-ed22c1fcea304fd7bce3b2ae2a08240925e68ef2bbe94e228128bdc0dc1fa3a3.scope - libcontainer container ed22c1fcea304fd7bce3b2ae2a08240925e68ef2bbe94e228128bdc0dc1fa3a3.
Feb 13 15:37:42.042394 containerd[1904]: time="2025-02-13T15:37:42.042106657Z" level=info msg="StartContainer for \"ed22c1fcea304fd7bce3b2ae2a08240925e68ef2bbe94e228128bdc0dc1fa3a3\" returns successfully"