Dec 13 01:36:01.776019 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:36:01.776104 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:36:01.776123 kernel: BIOS-provided physical RAM map:
Dec 13 01:36:01.776133 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:36:01.776143 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:36:01.776381 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:36:01.776401 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 01:36:01.776413 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 01:36:01.776423 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 01:36:01.776434 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:36:01.776445 kernel: NX (Execute Disable) protection: active
Dec 13 01:36:01.776456 kernel: APIC: Static calls initialized
Dec 13 01:36:01.776467 kernel: SMBIOS 2.7 present.
Dec 13 01:36:01.776479 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 01:36:01.776496 kernel: Hypervisor detected: KVM
Dec 13 01:36:01.776509 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:36:01.776521 kernel: kvm-clock: using sched offset of 6703879296 cycles
Dec 13 01:36:01.776535 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:36:01.776548 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 01:36:01.776562 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:36:01.776575 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:36:01.776591 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 01:36:01.776604 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:36:01.776617 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:36:01.776630 kernel: Using GB pages for direct mapping
Dec 13 01:36:01.776643 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:36:01.776656 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 01:36:01.776669 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 01:36:01.776682 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:36:01.776695 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 01:36:01.776711 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 01:36:01.776724 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:36:01.776737 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:36:01.776750 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 01:36:01.776763 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:36:01.776777 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 01:36:01.776790 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 01:36:01.776802 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:36:01.776815 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 01:36:01.776832 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 01:36:01.776851 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 01:36:01.776865 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 01:36:01.776879 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 01:36:01.776894 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 01:36:01.776911 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 01:36:01.776926 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 01:36:01.776940 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 01:36:01.776954 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 01:36:01.776968 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:36:01.776983 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:36:01.776998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 01:36:01.777012 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 01:36:01.777026 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 01:36:01.777044 kernel: Zone ranges:
Dec 13 01:36:01.777058 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:36:01.777070 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 01:36:01.777107 kernel:   Normal   empty
Dec 13 01:36:01.777121 kernel: Movable zone start for each node
Dec 13 01:36:01.777135 kernel: Early memory node ranges
Dec 13 01:36:01.777149 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:36:01.777162 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 01:36:01.777175 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 01:36:01.777189 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:36:01.777208 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:36:01.777223 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 01:36:01.777236 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:36:01.777251 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:36:01.777265 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 01:36:01.777279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:36:01.777293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:36:01.777306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:36:01.777321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:36:01.777337 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:36:01.777350 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:36:01.777365 kernel: TSC deadline timer available
Dec 13 01:36:01.777378 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:36:01.777390 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:36:01.777405 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 01:36:01.777418 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:36:01.777432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:36:01.777448 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:36:01.777464 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:36:01.777476 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:36:01.777488 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:36:01.777500 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:36:01.777512 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:36:01.777556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:36:01.777569 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:36:01.777582 kernel: random: crng init done
Dec 13 01:36:01.777598 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:36:01.779836 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:36:01.779857 kernel: Fallback order for Node 0: 0
Dec 13 01:36:01.779873 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 01:36:01.779888 kernel: Policy zone: DMA32
Dec 13 01:36:01.779904 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:36:01.779919 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Dec 13 01:36:01.779963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:36:01.779977 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:36:01.779995 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:36:01.780008 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:36:01.780020 kernel: Dynamic Preempt: voluntary
Dec 13 01:36:01.780033 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:36:01.780047 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:36:01.780063 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:36:01.780094 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:36:01.780108 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:36:01.780120 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:36:01.780138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:36:01.780149 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:36:01.780160 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:36:01.780172 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:36:01.780277 kernel: Console: colour VGA+ 80x25
Dec 13 01:36:01.780292 kernel: printk: console [ttyS0] enabled
Dec 13 01:36:01.780305 kernel: ACPI: Core revision 20230628
Dec 13 01:36:01.780318 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 01:36:01.780330 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:36:01.780346 kernel: x2apic enabled
Dec 13 01:36:01.780359 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:36:01.780383 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 01:36:01.780400 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 01:36:01.780414 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:36:01.780428 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:36:01.780443 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:36:01.780456 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:36:01.780470 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:36:01.780482 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:36:01.780497 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:36:01.780512 kernel: RETBleed: Vulnerable
Dec 13 01:36:01.780527 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:36:01.780544 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:36:01.780558 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:36:01.780571 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:36:01.780584 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:36:01.780597 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:36:01.780611 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:36:01.780628 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:36:01.780641 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:36:01.780655 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:36:01.780668 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:36:01.780682 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:36:01.780696 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 01:36:01.780710 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:36:01.780724 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Dec 13 01:36:01.780737 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Dec 13 01:36:01.780751 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Dec 13 01:36:01.780764 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Dec 13 01:36:01.780781 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 01:36:01.780794 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Dec 13 01:36:01.780807 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 01:36:01.780821 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:36:01.780834 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:36:01.780848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:36:01.780907 kernel: landlock: Up and running.
Dec 13 01:36:01.780922 kernel: SELinux:  Initializing.
Dec 13 01:36:01.780936 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:36:01.780950 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:36:01.780999 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:36:01.781017 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:36:01.781032 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:36:01.781070 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:36:01.781098 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:36:01.781112 kernel: signal: max sigframe size: 3632
Dec 13 01:36:01.781150 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:36:01.781166 kernel: rcu: 	Max phase no-delay instances is 400.
Dec 13 01:36:01.781180 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:36:01.781194 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:36:01.781237 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:36:01.781252 kernel: .... node  #0, CPUs:      #1
Dec 13 01:36:01.781268 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:36:01.781283 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:36:01.781515 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:36:01.781534 kernel: smpboot: Max logical packages: 1
Dec 13 01:36:01.781548 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 01:36:01.781561 kernel: devtmpfs: initialized
Dec 13 01:36:01.781575 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:36:01.781594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:36:01.781607 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:36:01.781621 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:36:01.782677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:36:01.782706 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:36:01.782722 kernel: audit: type=2000 audit(1734053760.316:1): state=initialized audit_enabled=0 res=1
Dec 13 01:36:01.782737 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:36:01.782752 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:36:01.782774 kernel: cpuidle: using governor menu
Dec 13 01:36:01.782790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:36:01.782808 kernel: dca service started, version 1.12.1
Dec 13 01:36:01.782825 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:36:01.782841 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:36:01.782859 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:36:01.782876 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:36:01.782893 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:36:01.782911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:36:01.783028 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:36:01.783045 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:36:01.783060 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:36:01.783138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:36:01.783152 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:36:01.783166 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:36:01.783180 kernel: ACPI: Interpreter enabled
Dec 13 01:36:01.783195 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:36:01.783208 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:36:01.783225 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:36:01.783244 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:36:01.783259 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:36:01.783274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:36:01.783617 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:36:01.783887 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:36:01.784027 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:36:01.784049 kernel: acpiphp: Slot [3] registered
Dec 13 01:36:01.784070 kernel: acpiphp: Slot [4] registered
Dec 13 01:36:01.784105 kernel: acpiphp: Slot [5] registered
Dec 13 01:36:01.784119 kernel: acpiphp: Slot [6] registered
Dec 13 01:36:01.784132 kernel: acpiphp: Slot [7] registered
Dec 13 01:36:01.784253 kernel: acpiphp: Slot [8] registered
Dec 13 01:36:01.784272 kernel: acpiphp: Slot [9] registered
Dec 13 01:36:01.784288 kernel: acpiphp: Slot [10] registered
Dec 13 01:36:01.784342 kernel: acpiphp: Slot [11] registered
Dec 13 01:36:01.784360 kernel: acpiphp: Slot [12] registered
Dec 13 01:36:01.784384 kernel: acpiphp: Slot [13] registered
Dec 13 01:36:01.784434 kernel: acpiphp: Slot [14] registered
Dec 13 01:36:01.784450 kernel: acpiphp: Slot [15] registered
Dec 13 01:36:01.784464 kernel: acpiphp: Slot [16] registered
Dec 13 01:36:01.784515 kernel: acpiphp: Slot [17] registered
Dec 13 01:36:01.784532 kernel: acpiphp: Slot [18] registered
Dec 13 01:36:01.784636 kernel: acpiphp: Slot [19] registered
Dec 13 01:36:01.784711 kernel: acpiphp: Slot [20] registered
Dec 13 01:36:01.784727 kernel: acpiphp: Slot [21] registered
Dec 13 01:36:01.784742 kernel: acpiphp: Slot [22] registered
Dec 13 01:36:01.784762 kernel: acpiphp: Slot [23] registered
Dec 13 01:36:01.784777 kernel: acpiphp: Slot [24] registered
Dec 13 01:36:01.784792 kernel: acpiphp: Slot [25] registered
Dec 13 01:36:01.784808 kernel: acpiphp: Slot [26] registered
Dec 13 01:36:01.784822 kernel: acpiphp: Slot [27] registered
Dec 13 01:36:01.784838 kernel: acpiphp: Slot [28] registered
Dec 13 01:36:01.784853 kernel: acpiphp: Slot [29] registered
Dec 13 01:36:01.784914 kernel: acpiphp: Slot [30] registered
Dec 13 01:36:01.784929 kernel: acpiphp: Slot [31] registered
Dec 13 01:36:01.784950 kernel: PCI host bridge to bus 0000:00
Dec 13 01:36:01.785134 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:36:01.785258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:36:01.785379 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:36:01.785494 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 01:36:01.785611 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:36:01.785842 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:36:01.786169 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 01:36:01.786327 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 01:36:01.786479 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:36:01.786618 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 01:36:01.786766 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 01:36:01.786915 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 01:36:01.787172 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 01:36:01.787378 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 01:36:01.787548 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 01:36:01.787693 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 01:36:01.787846 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 01:36:01.787984 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 01:36:01.788139 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 01:36:01.788567 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:36:01.788738 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:36:01.788881 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 01:36:01.789164 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:36:01.789315 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 01:36:01.789339 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:36:01.789356 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:36:01.789378 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:36:01.789395 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:36:01.789411 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:36:01.789428 kernel: iommu: Default domain type: Translated
Dec 13 01:36:01.789445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:36:01.789515 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:36:01.789532 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:36:01.789549 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:36:01.789566 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 01:36:01.789712 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 01:36:01.789855 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 01:36:01.789993 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:36:01.790015 kernel: vgaarb: loaded
Dec 13 01:36:01.790032 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 01:36:01.790050 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 01:36:01.790066 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:36:01.790104 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:36:01.790119 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:36:01.790137 kernel: pnp: PnP ACPI init
Dec 13 01:36:01.790151 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:36:01.790166 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:36:01.790180 kernel: NET: Registered PF_INET protocol family
Dec 13 01:36:01.790198 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:36:01.790214 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:36:01.790230 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:36:01.790247 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:36:01.790263 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:36:01.790285 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:36:01.790300 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:36:01.790317 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:36:01.790334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:36:01.790351 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:36:01.790508 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:36:01.790645 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:36:01.790778 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:36:01.790917 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 01:36:01.791073 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:36:01.791129 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:36:01.791145 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:36:01.791162 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 01:36:01.791178 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:36:01.791194 kernel: Initialise system trusted keyrings
Dec 13 01:36:01.791210 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:36:01.791230 kernel: Key type asymmetric registered
Dec 13 01:36:01.791246 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:36:01.791262 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:36:01.791279 kernel: io scheduler mq-deadline registered
Dec 13 01:36:01.791396 kernel: io scheduler kyber registered
Dec 13 01:36:01.791414 kernel: io scheduler bfq registered
Dec 13 01:36:01.791431 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:36:01.791448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:36:01.791465 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:36:01.791486 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:36:01.791502 kernel: i8042: Warning: Keylock active
Dec 13 01:36:01.791518 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:36:01.791534 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:36:01.791695 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:36:01.791927 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:36:01.792145 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:36:00 UTC (1734053760)
Dec 13 01:36:01.792288 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:36:01.792316 kernel: intel_pstate: CPU model not supported
Dec 13 01:36:01.792334 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:36:01.792350 kernel: Segment Routing with IPv6
Dec 13 01:36:01.792366 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:36:01.792383 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:36:01.792399 kernel: Key type dns_resolver registered
Dec 13 01:36:01.792415 kernel: IPI shorthand broadcast: enabled
Dec 13 01:36:01.792432 kernel: sched_clock: Marking stable (1032121769, 319591655)->(1477478415, -125764991)
Dec 13 01:36:01.792449 kernel: registered taskstats version 1
Dec 13 01:36:01.792469 kernel: Loading compiled-in X.509 certificates
Dec 13 01:36:01.792487 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:36:01.792503 kernel: Key type .fscrypt registered
Dec 13 01:36:01.792520 kernel: Key type fscrypt-provisioning registered
Dec 13 01:36:01.792537 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:36:01.792553 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:36:01.792570 kernel: ima: No architecture policies found
Dec 13 01:36:01.792587 kernel: clk: Disabling unused clocks
Dec 13 01:36:01.792603 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:36:01.792624 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:36:01.792640 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:36:01.792656 kernel: Run /init as init process
Dec 13 01:36:01.792673 kernel:   with arguments:
Dec 13 01:36:01.792690 kernel:     /init
Dec 13 01:36:01.792707 kernel:   with environment:
Dec 13 01:36:01.792722 kernel:     HOME=/
Dec 13 01:36:01.792737 kernel:     TERM=linux
Dec 13 01:36:01.792754 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:36:01.792782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:36:01.792819 systemd[1]: Detected virtualization amazon.
Dec 13 01:36:01.792841 systemd[1]: Detected architecture x86-64.
Dec 13 01:36:01.792859 systemd[1]: Running in initrd.
Dec 13 01:36:01.792878 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:36:01.792900 systemd[1]: Hostname set to .
Dec 13 01:36:01.792919 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:36:01.792937 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:36:01.792960 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:36:01.792982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:01.793001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:01.793020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:36:01.793040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:36:01.793059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:36:01.793076 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:36:01.793132 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:36:01.793151 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:36:01.793169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:01.793187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:36:01.793205 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:36:01.793224 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:36:01.793242 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:36:01.793265 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:36:01.793283 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:36:01.793301 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:36:01.793320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:36:01.793338 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:36:01.793357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:01.793376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:36:01.793395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:36:01.793417 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:36:01.793435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:36:01.793454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:36:01.793473 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:36:01.793492 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:36:01.793514 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:36:01.793537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:36:01.793556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:01.793575 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:36:01.793629 systemd-journald[178]: Collecting audit messages is disabled. Dec 13 01:36:01.793675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:36:01.793693 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:36:01.793714 systemd-journald[178]: Journal started Dec 13 01:36:01.793753 systemd-journald[178]: Runtime Journal (/run/log/journal/ec25d9408dfc40b3cf134d178bbbc1c4) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:36:01.797138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:36:01.801117 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:36:01.824734 systemd-modules-load[179]: Inserted module 'overlay' Dec 13 01:36:01.993340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 01:36:01.993379 kernel: Bridge firewalling registered Dec 13 01:36:01.886425 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:36:01.893168 systemd-modules-load[179]: Inserted module 'br_netfilter' Dec 13 01:36:01.996326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:36:02.007547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:02.015806 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:36:02.061186 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:02.089793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:36:02.108791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:36:02.118072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:36:02.230394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:36:02.237624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:36:02.267690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:36:02.282289 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:36:02.307148 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 01:36:02.411814 dracut-cmdline[213]: dracut-dracut-053 Dec 13 01:36:02.424113 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:36:02.484219 systemd-resolved[208]: Positive Trust Anchors: Dec 13 01:36:02.484242 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:36:02.484292 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:36:02.525134 systemd-resolved[208]: Defaulting to hostname 'linux'. Dec 13 01:36:02.528911 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:36:02.540445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:36:02.739284 kernel: SCSI subsystem initialized Dec 13 01:36:02.760126 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:36:02.784140 kernel: iscsi: registered transport (tcp) Dec 13 01:36:02.833115 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:36:02.833189 kernel: QLogic iSCSI HBA Driver Dec 13 01:36:03.013675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:36:03.040429 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:36:03.186157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:36:03.186236 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:36:03.193129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:36:03.314124 kernel: raid6: avx512x4 gen() 3612 MB/s Dec 13 01:36:03.332504 kernel: raid6: avx512x2 gen() 2474 MB/s Dec 13 01:36:03.349132 kernel: raid6: avx512x1 gen() 3322 MB/s Dec 13 01:36:03.367299 kernel: raid6: avx2x4 gen() 5393 MB/s Dec 13 01:36:03.388359 kernel: raid6: avx2x2 gen() 6019 MB/s Dec 13 01:36:03.405517 kernel: raid6: avx2x1 gen() 3432 MB/s Dec 13 01:36:03.405591 kernel: raid6: using algorithm avx2x2 gen() 6019 MB/s Dec 13 01:36:03.426117 kernel: raid6: .... xor() 3665 MB/s, rmw enabled Dec 13 01:36:03.426196 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:36:03.463110 kernel: xor: automatically using best checksumming function avx Dec 13 01:36:03.828106 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:36:03.846257 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:36:03.855600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:36:03.911077 systemd-udevd[396]: Using default interface naming scheme 'v255'. Dec 13 01:36:03.922764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:36:03.961886 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 13 01:36:04.033055 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Dec 13 01:36:04.135215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:36:04.145319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:36:04.239588 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:36:04.253584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:36:04.328980 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:36:04.351312 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:36:04.362698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:36:04.369978 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:36:04.388855 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:36:04.431358 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:36:04.451507 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:36:04.483884 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:36:04.484439 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 01:36:04.485152 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:36:04.485205 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:78:ed:3c:29:79 Dec 13 01:36:04.486602 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:36:04.486643 kernel: AES CTR mode by8 optimization enabled Dec 13 01:36:04.490092 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:36:04.517008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:36:04.517209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:36:04.520187 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:04.521386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:36:04.524323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:04.527303 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:04.539720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:36:04.575514 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:36:04.575782 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 01:36:04.589170 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:36:04.597126 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:36:04.597195 kernel: GPT:9289727 != 16777215 Dec 13 01:36:04.597218 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:36:04.597240 kernel: GPT:9289727 != 16777215 Dec 13 01:36:04.597260 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:36:04.597433 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:36:04.690107 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (444) Dec 13 01:36:04.695119 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (457) Dec 13 01:36:04.763012 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:36:04.855170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:36:04.875641 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:36:04.888444 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Dec 13 01:36:04.890020 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:36:04.966382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:36:04.995400 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:36:05.006036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:36:05.026702 disk-uuid[617]: Primary Header is updated. Dec 13 01:36:05.026702 disk-uuid[617]: Secondary Entries is updated. Dec 13 01:36:05.026702 disk-uuid[617]: Secondary Header is updated. Dec 13 01:36:05.034107 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:36:05.042109 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:36:05.052049 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:36:05.057361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:36:06.052872 disk-uuid[618]: The operation has completed successfully. Dec 13 01:36:06.054301 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:36:06.279376 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:36:06.279509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:36:06.305297 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:36:06.309391 sh[969]: Success Dec 13 01:36:06.325116 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:36:06.455793 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:36:06.470773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:36:06.477249 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:36:06.515750 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:36:06.515816 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:06.515847 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:36:06.517309 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:36:06.517332 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:36:06.565121 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:36:06.568361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:36:06.570838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:36:06.583279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:36:06.592343 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:36:06.610872 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:06.610938 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:06.610962 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:36:06.617131 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:36:06.630628 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:36:06.631879 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:06.639140 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:36:06.648318 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:36:06.714670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 13 01:36:06.724360 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:36:06.781367 systemd-networkd[1161]: lo: Link UP Dec 13 01:36:06.781380 systemd-networkd[1161]: lo: Gained carrier Dec 13 01:36:06.784456 systemd-networkd[1161]: Enumeration completed Dec 13 01:36:06.784985 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:06.784990 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:36:06.786266 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:36:06.790714 systemd[1]: Reached target network.target - Network. Dec 13 01:36:06.805642 systemd-networkd[1161]: eth0: Link UP Dec 13 01:36:06.805652 systemd-networkd[1161]: eth0: Gained carrier Dec 13 01:36:06.805670 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:36:06.828725 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.20.145/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:36:06.874790 ignition[1083]: Ignition 2.19.0 Dec 13 01:36:06.874805 ignition[1083]: Stage: fetch-offline Dec 13 01:36:06.875115 ignition[1083]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:06.875128 ignition[1083]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:06.880148 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:36:06.875524 ignition[1083]: Ignition finished successfully Dec 13 01:36:06.889322 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:36:06.908665 ignition[1170]: Ignition 2.19.0 Dec 13 01:36:06.908679 ignition[1170]: Stage: fetch Dec 13 01:36:06.909304 ignition[1170]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:06.909318 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:06.909510 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:06.965931 ignition[1170]: PUT result: OK Dec 13 01:36:06.979461 ignition[1170]: parsed url from cmdline: "" Dec 13 01:36:06.979472 ignition[1170]: no config URL provided Dec 13 01:36:06.979483 ignition[1170]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:36:06.979498 ignition[1170]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:36:06.979526 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:06.983433 ignition[1170]: PUT result: OK Dec 13 01:36:06.983513 ignition[1170]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:36:06.987428 ignition[1170]: GET result: OK Dec 13 01:36:06.987537 ignition[1170]: parsing config with SHA512: 354e8ee6daa72694d335be96d7fdd209578100b36bbd41430b3b5f44814c6ed22e9ea7c85f9abf3f90df151ce2b1ad89038b8b396b7a5a4647490296999eac72 Dec 13 01:36:06.994967 unknown[1170]: fetched base config from "system" Dec 13 01:36:06.994986 unknown[1170]: fetched base config from "system" Dec 13 01:36:06.995836 ignition[1170]: fetch: fetch complete Dec 13 01:36:06.994995 unknown[1170]: fetched user config from "aws" Dec 13 01:36:06.995842 ignition[1170]: fetch: fetch passed Dec 13 01:36:07.001914 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:36:06.995900 ignition[1170]: Ignition finished successfully Dec 13 01:36:07.018502 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:36:07.043584 ignition[1176]: Ignition 2.19.0 Dec 13 01:36:07.043614 ignition[1176]: Stage: kargs Dec 13 01:36:07.046041 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:07.046052 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:07.047185 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:07.051172 ignition[1176]: PUT result: OK Dec 13 01:36:07.057560 ignition[1176]: kargs: kargs passed Dec 13 01:36:07.057651 ignition[1176]: Ignition finished successfully Dec 13 01:36:07.062686 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:36:07.070595 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:36:07.145671 ignition[1182]: Ignition 2.19.0 Dec 13 01:36:07.145689 ignition[1182]: Stage: disks Dec 13 01:36:07.146999 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:07.147014 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:07.147895 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:07.151516 ignition[1182]: PUT result: OK Dec 13 01:36:07.159971 ignition[1182]: disks: disks passed Dec 13 01:36:07.160071 ignition[1182]: Ignition finished successfully Dec 13 01:36:07.164242 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:36:07.175920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:36:07.182762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:36:07.195122 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:36:07.198664 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:36:07.200291 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:36:07.210330 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 01:36:07.247442 systemd-fsck[1190]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:36:07.251051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:36:07.264477 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:36:07.396563 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:36:07.397309 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:36:07.399663 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:36:07.409413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:36:07.417355 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:36:07.420429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:36:07.422715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:36:07.422745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:36:07.428710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:36:07.438368 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:36:07.445145 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1209) Dec 13 01:36:07.447116 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:07.447177 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:07.448369 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:36:07.460542 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:36:07.462907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:36:07.622654 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:36:07.635251 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:36:07.643257 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:36:07.650391 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:36:07.807788 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:36:07.815229 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:36:07.822121 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:36:07.828727 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:36:07.831718 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:07.872015 ignition[1325]: INFO : Ignition 2.19.0 Dec 13 01:36:07.872015 ignition[1325]: INFO : Stage: mount Dec 13 01:36:07.872015 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:07.872015 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:07.872015 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:07.883702 ignition[1325]: INFO : PUT result: OK Dec 13 01:36:07.885790 ignition[1325]: INFO : mount: mount passed Dec 13 01:36:07.887994 ignition[1325]: INFO : Ignition finished successfully Dec 13 01:36:07.889722 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:36:07.891369 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:36:07.908300 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:36:07.926452 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:36:07.949112 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337) Dec 13 01:36:07.952124 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:36:07.952190 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:36:07.953098 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:36:07.958135 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:36:07.960794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:36:07.996398 ignition[1353]: INFO : Ignition 2.19.0 Dec 13 01:36:07.996398 ignition[1353]: INFO : Stage: files Dec 13 01:36:08.002407 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:36:08.002407 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:36:08.002407 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:36:08.002407 ignition[1353]: INFO : PUT result: OK Dec 13 01:36:08.011768 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:36:08.025630 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:36:08.025630 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:36:08.037163 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:36:08.039391 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:36:08.039391 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:36:08.037991 unknown[1353]: wrote ssh authorized keys file for user: core Dec 13 01:36:08.050189 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 
01:36:08.050189 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:36:08.050189 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:36:08.050189 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:36:08.156942 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:36:08.303762 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:36:08.303762 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:36:08.314224 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:36:08.314224 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:36:08.332969 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:36:08.332969 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:36:08.332969 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 
01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:36:08.349434 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:36:08.754260 systemd-networkd[1161]: eth0: Gained IPv6LL Dec 13 01:36:08.971852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:36:09.920533 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:36:09.920533 ignition[1353]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:36:09.925382 ignition[1353]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:36:09.928275 ignition[1353]: INFO : files: files passed Dec 13 01:36:09.928275 ignition[1353]: INFO : Ignition finished successfully Dec 13 01:36:09.955505 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:36:09.965331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:36:09.978503 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:36:09.993795 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:36:09.994099 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 01:36:10.011982 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:10.011982 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:10.021621 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:36:10.026355 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:36:10.032714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:36:10.041342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:36:10.073380 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:36:10.073515 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:36:10.077018 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:36:10.079498 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:36:10.084319 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:36:10.105376 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:36:10.121929 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:36:10.129777 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:36:10.149675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:36:10.150271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:36:10.150481 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:36:10.150738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:36:10.150891 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:36:10.153288 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:36:10.154066 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:36:10.154893 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:36:10.155698 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:36:10.155864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:36:10.156126 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:36:10.156304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:36:10.156571 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:36:10.156841 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:36:10.157020 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:36:10.157415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:36:10.157599 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:36:10.158227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:36:10.158452 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:10.158629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:36:10.172618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:10.175833 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:36:10.177916 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:36:10.216236 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:36:10.217694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:36:10.225818 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:36:10.226956 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:36:10.239234 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:36:10.240803 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:36:10.241008 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:36:10.271717 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:36:10.272901 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:36:10.273319 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:36:10.275824 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:36:10.276183 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:36:10.294077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:36:10.298241 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:36:10.309274 ignition[1406]: INFO : Ignition 2.19.0
Dec 13 01:36:10.309274 ignition[1406]: INFO : Stage: umount
Dec 13 01:36:10.318409 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:36:10.318409 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:36:10.318409 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:36:10.318409 ignition[1406]: INFO : PUT result: OK
Dec 13 01:36:10.327407 ignition[1406]: INFO : umount: umount passed
Dec 13 01:36:10.327407 ignition[1406]: INFO : Ignition finished successfully
Dec 13 01:36:10.330286 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:36:10.330412 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:36:10.335535 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:36:10.335618 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:36:10.337633 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:36:10.337689 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:36:10.339048 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:36:10.339109 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:36:10.340843 systemd[1]: Stopped target network.target - Network.
Dec 13 01:36:10.350353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:36:10.350626 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:36:10.354672 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:36:10.358429 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:36:10.362696 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:10.365480 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:36:10.370509 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:36:10.373361 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:36:10.373518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:36:10.377967 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:36:10.378024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:36:10.378476 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:36:10.378532 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:36:10.378664 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:36:10.378855 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:36:10.379239 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:36:10.379545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:36:10.381657 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:36:10.382392 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:36:10.382511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:36:10.383563 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:36:10.383657 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:36:10.409607 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:36:10.409759 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:36:10.410138 systemd-networkd[1161]: eth0: DHCPv6 lease lost
Dec 13 01:36:10.416200 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:36:10.416395 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:36:10.421912 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:36:10.421978 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:10.438338 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:36:10.440843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:36:10.440936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:36:10.444765 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:36:10.444833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:36:10.448926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:36:10.448983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:36:10.450249 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:36:10.450292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:36:10.452920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:36:10.469527 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:36:10.472051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:36:10.476574 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:36:10.480353 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:36:10.490344 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:36:10.490427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:36:10.494344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:36:10.494409 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:36:10.498608 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:36:10.498696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:36:10.503436 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:36:10.503534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:36:10.507059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:36:10.507199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:36:10.542631 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:36:10.546840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:36:10.546977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:36:10.550887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:36:10.551997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:36:10.566240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:36:10.566371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:36:10.577853 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:36:10.585249 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:36:10.617966 systemd[1]: Switching root.
Dec 13 01:36:10.648438 systemd-journald[178]: Journal stopped
Dec 13 01:36:12.910222 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:36:12.910329 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:36:12.910351 kernel: SELinux: policy capability open_perms=1
Dec 13 01:36:12.910374 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:36:12.910391 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:36:12.910408 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:36:12.910434 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:36:12.910453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:36:12.910477 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:36:12.910499 kernel: audit: type=1403 audit(1734053771.242:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:36:12.910519 systemd[1]: Successfully loaded SELinux policy in 70.666ms.
Dec 13 01:36:12.910549 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.975ms.
Dec 13 01:36:12.910569 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:36:12.910589 systemd[1]: Detected virtualization amazon.
Dec 13 01:36:12.910608 systemd[1]: Detected architecture x86-64.
Dec 13 01:36:12.910627 systemd[1]: Detected first boot.
Dec 13 01:36:12.910645 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:36:12.910664 zram_generator::config[1466]: No configuration found.
Dec 13 01:36:12.910683 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:36:12.910708 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:36:12.910730 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:36:12.910749 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:36:12.910774 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:36:12.910792 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:36:12.910812 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:36:12.910833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:36:12.910853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:36:12.910874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:36:12.910894 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:36:12.910912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:36:12.910931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:36:12.910951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:36:12.910969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:36:12.910989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:36:12.911006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:36:12.911024 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:36:12.911042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:36:12.911065 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:36:12.912347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:36:12.912485 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:36:12.912509 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:36:12.912612 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:36:12.912635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:36:12.912656 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:36:12.912675 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:36:12.912702 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:36:12.912722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:36:12.912741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:36:12.912760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:36:12.912779 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:36:12.912798 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:36:12.912818 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:36:12.912838 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:36:12.912856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:36:12.912879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:36:12.912899 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:36:12.912919 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:36:12.912938 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:36:12.912957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:36:12.912977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:36:12.912996 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:36:12.913016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:36:12.913035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:36:12.913058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:36:12.913077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:36:12.913529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:36:12.913548 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:36:12.913566 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:36:12.913586 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:36:12.913606 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:36:12.913626 kernel: fuse: init (API version 7.39)
Dec 13 01:36:12.913655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:36:12.913675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:36:12.913696 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:36:12.913716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:36:12.913738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:36:12.913759 kernel: ACPI: bus type drm_connector registered
Dec 13 01:36:12.913854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:36:12.913882 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:36:12.913905 kernel: loop: module loaded
Dec 13 01:36:12.913931 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:36:12.913954 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:36:12.913977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:36:12.914000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:36:12.914022 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:36:12.914044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:36:12.914066 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:36:12.914110 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:36:12.914133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:36:12.914160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:36:12.914184 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:36:12.914205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:36:12.914228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:36:12.914292 systemd-journald[1570]: Collecting audit messages is disabled.
Dec 13 01:36:12.914501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:36:12.914529 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:36:12.914553 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:36:12.914575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:36:12.914707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:36:12.914736 systemd-journald[1570]: Journal started
Dec 13 01:36:12.914798 systemd-journald[1570]: Runtime Journal (/run/log/journal/ec25d9408dfc40b3cf134d178bbbc1c4) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:36:12.919252 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:36:12.925763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:36:12.928321 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:36:12.930355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:36:12.950102 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:36:12.960311 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:36:12.969217 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:36:12.970939 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:36:12.986450 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:36:12.994357 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:36:12.998403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:36:13.020863 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:36:13.022659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:36:13.036464 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:36:13.045226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:36:13.052707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:36:13.058330 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:36:13.062308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:36:13.077736 systemd-journald[1570]: Time spent on flushing to /var/log/journal/ec25d9408dfc40b3cf134d178bbbc1c4 is 74.317ms for 948 entries.
Dec 13 01:36:13.077736 systemd-journald[1570]: System Journal (/var/log/journal/ec25d9408dfc40b3cf134d178bbbc1c4) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:36:13.166281 systemd-journald[1570]: Received client request to flush runtime journal.
Dec 13 01:36:13.084292 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:36:13.088151 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:36:13.095501 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:36:13.127664 udevadm[1622]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:36:13.146891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:36:13.156701 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Dec 13 01:36:13.156725 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Dec 13 01:36:13.175593 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:36:13.179461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:36:13.205310 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:36:13.318280 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:36:13.330642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:36:13.358486 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Dec 13 01:36:13.358516 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Dec 13 01:36:13.368007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:36:14.157606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:36:14.164422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:36:14.202960 systemd-udevd[1645]: Using default interface naming scheme 'v255'.
Dec 13 01:36:14.249078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:36:14.261580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:36:14.294612 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:36:14.346895 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:36:14.353388 (udev-worker)[1653]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:36:14.384121 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1658)
Dec 13 01:36:14.386112 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1658)
Dec 13 01:36:14.401717 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:36:14.500114 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:36:14.517357 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:36:14.517456 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 13 01:36:14.525102 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:36:14.525174 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 01:36:14.560274 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 01:36:14.534017 systemd-networkd[1648]: lo: Link UP
Dec 13 01:36:14.534023 systemd-networkd[1648]: lo: Gained carrier
Dec 13 01:36:14.536572 systemd-networkd[1648]: Enumeration completed
Dec 13 01:36:14.536745 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:36:14.541006 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:36:14.541012 systemd-networkd[1648]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:36:14.546590 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:36:14.552973 systemd-networkd[1648]: eth0: Link UP
Dec 13 01:36:14.555847 systemd-networkd[1648]: eth0: Gained carrier
Dec 13 01:36:14.555877 systemd-networkd[1648]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:36:14.566247 systemd-networkd[1648]: eth0: DHCPv4 address 172.31.20.145/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:36:14.598101 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:36:14.598172 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1657)
Dec 13 01:36:14.625464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:36:14.766066 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:36:14.792993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:36:14.890247 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:36:14.901331 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:36:14.917064 lvm[1769]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:36:14.946486 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:36:14.948358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:36:14.955469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:36:14.963340 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:36:14.991490 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:36:14.993204 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:36:14.994576 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:36:14.994614 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:36:14.995880 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:36:14.999069 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:36:15.004298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:36:15.011294 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:36:15.012598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:36:15.014529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:36:15.029343 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:36:15.036178 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:36:15.048885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:36:15.066318 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:36:15.067325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:36:15.081112 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:36:15.082189 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:36:15.128308 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:36:15.152221 kernel: loop1: detected capacity change from 0 to 61336 Dec 13 01:36:15.193124 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 01:36:15.323150 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:36:15.429307 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:36:15.489238 kernel: loop5: detected capacity change from 0 to 61336 Dec 13 01:36:15.535117 kernel: loop6: detected capacity change from 0 to 211296 Dec 13 01:36:15.579118 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 01:36:15.630211 (sd-merge)[1794]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:36:15.631140 (sd-merge)[1794]: Merged extensions into '/usr'. Dec 13 01:36:15.636963 systemd[1]: Reloading requested from client PID 1780 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:36:15.636982 systemd[1]: Reloading... Dec 13 01:36:15.667190 systemd-networkd[1648]: eth0: Gained IPv6LL Dec 13 01:36:15.804129 zram_generator::config[1824]: No configuration found. Dec 13 01:36:16.038443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:16.158206 systemd[1]: Reloading finished in 520 ms. Dec 13 01:36:16.175229 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:36:16.179535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:36:16.193325 systemd[1]: Starting ensure-sysext.service... Dec 13 01:36:16.193770 ldconfig[1776]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Dec 13 01:36:16.197600 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:36:16.202260 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:36:16.220994 systemd[1]: Reloading requested from client PID 1879 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:36:16.221011 systemd[1]: Reloading... Dec 13 01:36:16.244358 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:36:16.244891 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:36:16.246553 systemd-tmpfiles[1880]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:36:16.247011 systemd-tmpfiles[1880]: ACLs are not supported, ignoring. Dec 13 01:36:16.247139 systemd-tmpfiles[1880]: ACLs are not supported, ignoring. Dec 13 01:36:16.253283 systemd-tmpfiles[1880]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:36:16.253300 systemd-tmpfiles[1880]: Skipping /boot Dec 13 01:36:16.276570 systemd-tmpfiles[1880]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:36:16.276592 systemd-tmpfiles[1880]: Skipping /boot Dec 13 01:36:16.328130 zram_generator::config[1906]: No configuration found. Dec 13 01:36:16.493668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:16.611997 systemd[1]: Reloading finished in 390 ms. Dec 13 01:36:16.637175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:36:16.650345 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Dec 13 01:36:16.654902 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:36:16.666823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:36:16.679152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:36:16.691319 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:36:16.715632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:16.716327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:36:16.722500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:16.733438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:36:16.742330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:36:16.746471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:36:16.750321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:16.757153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:16.763431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:16.774233 augenrules[1991]: No rules Dec 13 01:36:16.775625 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:36:16.789040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:16.790395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 01:36:16.819542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:16.821718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:36:16.822269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:16.823902 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:36:16.827035 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:36:16.833237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:36:16.833992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:36:16.836771 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:36:16.837268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:36:16.841026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:16.843434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:16.871376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:16.873393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:36:16.883287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:36:16.898673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:36:16.917680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:36:16.975506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:36:16.977218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:36:16.977498 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:36:16.987549 systemd-resolved[1976]: Positive Trust Anchors: Dec 13 01:36:16.987572 systemd-resolved[1976]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:36:16.987625 systemd-resolved[1976]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:36:16.993154 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:36:16.994596 systemd-resolved[1976]: Defaulting to hostname 'linux'. Dec 13 01:36:16.995281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:36:17.013965 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:36:17.024050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:36:17.024305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:36:17.027889 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:36:17.028206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:36:17.030836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 01:36:17.034529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:36:17.036984 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:36:17.037430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:36:17.049382 systemd[1]: Finished ensure-sysext.service. Dec 13 01:36:17.056393 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:36:17.069350 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:36:17.072001 systemd[1]: Reached target network.target - Network. Dec 13 01:36:17.073320 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:36:17.074735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:36:17.076297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:36:17.076549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:36:17.076596 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:36:17.076640 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:36:17.077950 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:36:17.079383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:36:17.080922 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:36:17.082174 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Dec 13 01:36:17.083642 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:36:17.085987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:36:17.086022 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:36:17.087102 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:36:17.090034 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:36:17.093744 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:36:17.097526 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:36:17.105770 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:36:17.107982 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:36:17.109092 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:36:17.110825 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:36:17.110893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:36:17.110927 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:36:17.120450 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:36:17.126481 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:36:17.136308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:36:17.143135 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:36:17.178566 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:36:17.180173 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Dec 13 01:36:17.212939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:17.220224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:36:17.228049 jq[2037]: false Dec 13 01:36:17.238988 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:36:17.253291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:36:17.263306 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:36:17.271274 dbus-daemon[2036]: [system] SELinux support is enabled Dec 13 01:36:17.275013 extend-filesystems[2038]: Found loop4 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found loop5 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found loop6 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found loop7 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p1 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p2 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p3 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found usr Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p4 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p6 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p7 Dec 13 01:36:17.275013 extend-filesystems[2038]: Found nvme0n1p9 Dec 13 01:36:17.275013 extend-filesystems[2038]: Checking size of /dev/nvme0n1p9 Dec 13 01:36:17.279992 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:36:17.287439 dbus-daemon[2036]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1648 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:36:17.294295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 01:36:17.306313 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:36:17.332394 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:36:17.334897 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:36:17.352315 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:36:17.376789 extend-filesystems[2038]: Resized partition /dev/nvme0n1p9 Dec 13 01:36:17.381237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:36:17.385454 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:36:17.397481 extend-filesystems[2073]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:36:17.401773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:36:17.402205 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:36:17.405194 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:36:17.406975 jq[2068]: true Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: ---------------------------------------------------- Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: corporation. 
Support and training for ntp-4 are Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: available at https://www.nwtime.org/support Dec 13 01:36:17.423256 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: ---------------------------------------------------- Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: proto: precision = 0.084 usec (-23) Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: basedate set to 2024-11-30 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: gps base set to 2024-12-01 (week 2343) Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen normally on 3 eth0 172.31.20.145:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen normally on 4 lo [::1]:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listen normally on 5 eth0 [fe80::478:edff:fe3c:2979%2]:123 Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: Listening on routing socket on fd #22 for interface updates Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:36:17.441557 ntpd[2043]: 13 Dec 01:36:17 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:36:17.441204 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:36:17.441591 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:36:17.455337 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:36:17.469225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:36:17.469572 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:36:17.481134 coreos-metadata[2035]: Dec 13 01:36:17.478 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:36:17.493272 coreos-metadata[2035]: Dec 13 01:36:17.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:36:17.500605 coreos-metadata[2035]: Dec 13 01:36:17.500 INFO Fetch successful Dec 13 01:36:17.500605 coreos-metadata[2035]: Dec 13 01:36:17.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:36:17.515059 coreos-metadata[2035]: Dec 13 01:36:17.515 INFO Fetch successful Dec 13 01:36:17.515215 coreos-metadata[2035]: Dec 13 01:36:17.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:36:17.520970 coreos-metadata[2035]: Dec 13 01:36:17.520 INFO Fetch successful Dec 13 01:36:17.520970 coreos-metadata[2035]: Dec 13 01:36:17.520 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:36:17.521291 coreos-metadata[2035]: Dec 13 01:36:17.521 INFO Fetch successful Dec 13 01:36:17.521351 coreos-metadata[2035]: Dec 13 01:36:17.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:36:17.524234 coreos-metadata[2035]: Dec 13 01:36:17.522 INFO Fetch failed with 404: resource not found Dec 13 01:36:17.524234 coreos-metadata[2035]: Dec 13 01:36:17.522 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:36:17.529909 coreos-metadata[2035]: Dec 13 01:36:17.527 INFO Fetch successful Dec 13 01:36:17.529909 coreos-metadata[2035]: Dec 13 01:36:17.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:36:17.537132 update_engine[2062]: I20241213 01:36:17.536146 2062 main.cc:92] Flatcar Update Engine starting Dec 13 01:36:17.540146 coreos-metadata[2035]: Dec 13 01:36:17.537 INFO Fetch successful Dec 13 01:36:17.540146 
coreos-metadata[2035]: Dec 13 01:36:17.537 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:36:17.539629 (ntainerd)[2093]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:36:17.551280 coreos-metadata[2035]: Dec 13 01:36:17.543 INFO Fetch successful Dec 13 01:36:17.551280 coreos-metadata[2035]: Dec 13 01:36:17.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:36:17.551280 coreos-metadata[2035]: Dec 13 01:36:17.546 INFO Fetch successful Dec 13 01:36:17.551280 coreos-metadata[2035]: Dec 13 01:36:17.546 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:36:17.552341 update_engine[2062]: I20241213 01:36:17.552124 2062 update_check_scheduler.cc:74] Next update check in 3m51s Dec 13 01:36:17.560210 jq[2090]: true Dec 13 01:36:17.561143 coreos-metadata[2035]: Dec 13 01:36:17.560 INFO Fetch successful Dec 13 01:36:17.593651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:36:17.593699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:36:17.596248 dbus-daemon[2036]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:36:17.601159 tar[2080]: linux-amd64/helm Dec 13 01:36:17.598232 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:36:17.598273 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 13 01:36:17.625613 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2105) Dec 13 01:36:17.625703 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:36:17.610728 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:36:17.647305 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:36:17.652143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:36:17.662175 extend-filesystems[2073]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:36:17.662175 extend-filesystems[2073]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:36:17.662175 extend-filesystems[2073]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:36:17.671355 extend-filesystems[2038]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:36:17.662838 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:36:17.665154 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:36:17.665480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:36:17.683290 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:36:17.718629 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:36:17.739153 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:36:17.746549 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:36:17.907981 bash[2172]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:36:17.915810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:36:17.942599 systemd[1]: Starting sshkeys.service... 
Dec 13 01:36:18.066552 systemd-logind[2059]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:36:18.066587 systemd-logind[2059]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 01:36:18.066615 systemd-logind[2059]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:36:18.077518 systemd-logind[2059]: New seat seat0. Dec 13 01:36:18.114364 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:36:18.138350 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:36:18.146737 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:36:18.328219 amazon-ssm-agent[2148]: Initializing new seelog logger Dec 13 01:36:18.343111 amazon-ssm-agent[2148]: New Seelog Logger Creation Complete Dec 13 01:36:18.343111 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.343111 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.346229 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 processing appconfig overrides Dec 13 01:36:18.359508 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.359508 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.359713 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 processing appconfig overrides Dec 13 01:36:18.371103 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO Proxy environment variables: Dec 13 01:36:18.373360 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.373360 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:36:18.373593 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 processing appconfig overrides Dec 13 01:36:18.404540 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.404799 amazon-ssm-agent[2148]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:36:18.406787 amazon-ssm-agent[2148]: 2024/12/13 01:36:18 processing appconfig overrides Dec 13 01:36:18.459757 coreos-metadata[2226]: Dec 13 01:36:18.458 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:36:18.471363 coreos-metadata[2226]: Dec 13 01:36:18.461 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:36:18.471363 coreos-metadata[2226]: Dec 13 01:36:18.462 INFO Fetch successful Dec 13 01:36:18.471363 coreos-metadata[2226]: Dec 13 01:36:18.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:36:18.471363 coreos-metadata[2226]: Dec 13 01:36:18.463 INFO Fetch successful Dec 13 01:36:18.471763 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO https_proxy: Dec 13 01:36:18.485608 unknown[2226]: wrote ssh authorized keys file for user: core Dec 13 01:36:18.500328 dbus-daemon[2036]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:36:18.503856 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:36:18.510873 dbus-daemon[2036]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2137 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:36:18.534363 locksmithd[2138]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:36:18.594309 systemd[1]: Starting polkit.service - Authorization Manager... 
Dec 13 01:36:18.623133 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO http_proxy: Dec 13 01:36:18.629010 update-ssh-keys[2248]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:36:18.632934 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:36:18.655787 systemd[1]: Finished sshkeys.service. Dec 13 01:36:18.677451 polkitd[2251]: Started polkitd version 121 Dec 13 01:36:18.729897 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO no_proxy: Dec 13 01:36:18.746784 polkitd[2251]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:36:18.746874 polkitd[2251]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:36:18.769486 polkitd[2251]: Finished loading, compiling and executing 2 rules Dec 13 01:36:18.790915 dbus-daemon[2036]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:36:18.791239 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:36:18.793572 polkitd[2251]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:36:18.819612 sshd_keygen[2086]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:36:18.828271 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:36:18.927769 amazon-ssm-agent[2148]: 2024-12-13 01:36:18 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:36:18.968625 systemd-hostnamed[2137]: Hostname set to (transient) Dec 13 01:36:18.969843 systemd-resolved[1976]: System hostname changed to 'ip-172-31-20-145'. Dec 13 01:36:18.983031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 13 01:36:19.045920 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO Agent will take identity from EC2
Dec 13 01:36:19.062186 containerd[2093]: time="2024-12-13T01:36:19.062052400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:36:19.065851 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:36:19.078652 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:36:19.089963 systemd[1]: Started sshd@0-172.31.20.145:22-139.178.68.195:57018.service - OpenSSH per-connection server daemon (139.178.68.195:57018).
Dec 13 01:36:19.142582 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:36:19.142955 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:36:19.147963 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:36:19.155667 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:36:19.215837 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:36:19.230041 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:36:19.235504 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:36:19.237268 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:36:19.247693 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:36:19.255122 containerd[2093]: time="2024-12-13T01:36:19.254164265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.256355 containerd[2093]: time="2024-12-13T01:36:19.256300293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:36:19.256472 containerd[2093]: time="2024-12-13T01:36:19.256403779Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:36:19.256472 containerd[2093]: time="2024-12-13T01:36:19.256431579Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:36:19.256652 containerd[2093]: time="2024-12-13T01:36:19.256628715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:36:19.256703 containerd[2093]: time="2024-12-13T01:36:19.256665735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.256905 containerd[2093]: time="2024-12-13T01:36:19.256749700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:36:19.256905 containerd[2093]: time="2024-12-13T01:36:19.256769569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257100760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257130273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257152292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257170485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257274916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257601 containerd[2093]: time="2024-12-13T01:36:19.257523623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257845 containerd[2093]: time="2024-12-13T01:36:19.257720953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:36:19.257845 containerd[2093]: time="2024-12-13T01:36:19.257744877Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:36:19.257927 containerd[2093]: time="2024-12-13T01:36:19.257846015Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:36:19.257927 containerd[2093]: time="2024-12-13T01:36:19.257901953Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:36:19.268039 containerd[2093]: time="2024-12-13T01:36:19.267889942Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:36:19.268039 containerd[2093]: time="2024-12-13T01:36:19.267986586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.268013774Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.268320240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.268343427Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.268556081Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.269993950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270187881Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270213248Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270235835Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270257377Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270278650Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270299729Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270321931Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270344566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271111 containerd[2093]: time="2024-12-13T01:36:19.270364426Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270383466Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270405218Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270449315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270470384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270488860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270509724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270530240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270552006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270570204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270590564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270621261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270649369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270668437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.271700 containerd[2093]: time="2024-12-13T01:36:19.270686551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270705883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270733940Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270764899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270783391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270799804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270856134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270880421Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270897459Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270915590Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270930902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270949165Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270968910Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:36:19.272234 containerd[2093]: time="2024-12-13T01:36:19.270983776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:36:19.286138 containerd[2093]: time="2024-12-13T01:36:19.276041888Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:36:19.286138 containerd[2093]: time="2024-12-13T01:36:19.276170538Z" level=info msg="Connect containerd service"
Dec 13 01:36:19.286138 containerd[2093]: time="2024-12-13T01:36:19.276236365Z" level=info msg="using legacy CRI server"
Dec 13 01:36:19.286138 containerd[2093]: time="2024-12-13T01:36:19.276248474Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:36:19.286138 containerd[2093]: time="2024-12-13T01:36:19.276417326Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:36:19.301199 containerd[2093]: time="2024-12-13T01:36:19.301139141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:36:19.301662 containerd[2093]: time="2024-12-13T01:36:19.301630295Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:36:19.301736 containerd[2093]: time="2024-12-13T01:36:19.301707580Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301760284Z" level=info msg="Start subscribing containerd event"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301823789Z" level=info msg="Start recovering state"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301922192Z" level=info msg="Start event monitor"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301937063Z" level=info msg="Start snapshots syncer"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301950410Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.301970085Z" level=info msg="Start streaming server"
Dec 13 01:36:19.302099 containerd[2093]: time="2024-12-13T01:36:19.302044614Z" level=info msg="containerd successfully booted in 0.247271s"
Dec 13 01:36:19.312875 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:36:19.347200 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:36:19.409108 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 57018 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:19.426457 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:19.447172 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Dec 13 01:36:19.456869 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:36:19.465436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:36:19.478405 systemd-logind[2059]: New session 1 of user core.
Dec 13 01:36:19.528453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:36:19.550182 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 13 01:36:19.564513 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:36:19.584499 (systemd)[2319]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:36:19.652104 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] Starting Core Agent
Dec 13 01:36:19.756825 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Dec 13 01:36:19.850166 systemd[2319]: Queued start job for default target default.target.
Dec 13 01:36:19.851109 systemd[2319]: Created slice app.slice - User Application Slice.
Dec 13 01:36:19.851143 systemd[2319]: Reached target paths.target - Paths.
Dec 13 01:36:19.851164 systemd[2319]: Reached target timers.target - Timers.
Dec 13 01:36:19.858136 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [Registrar] Starting registrar module
Dec 13 01:36:19.859329 systemd[2319]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:36:19.908459 systemd[2319]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:36:19.908550 systemd[2319]: Reached target sockets.target - Sockets.
Dec 13 01:36:19.908571 systemd[2319]: Reached target basic.target - Basic System.
Dec 13 01:36:19.908633 systemd[2319]: Reached target default.target - Main User Target.
Dec 13 01:36:19.908669 systemd[2319]: Startup finished in 293ms.
Dec 13 01:36:19.911271 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:36:19.924534 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:36:19.959925 amazon-ssm-agent[2148]: 2024-12-13 01:36:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Dec 13 01:36:20.093287 systemd[1]: Started sshd@1-172.31.20.145:22-139.178.68.195:57034.service - OpenSSH per-connection server daemon (139.178.68.195:57034).
Dec 13 01:36:20.154308 tar[2080]: linux-amd64/LICENSE
Dec 13 01:36:20.154308 tar[2080]: linux-amd64/README.md
Dec 13 01:36:20.189893 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:36:20.231901 amazon-ssm-agent[2148]: 2024-12-13 01:36:20 INFO [EC2Identity] EC2 registration was successful.
Dec 13 01:36:20.263384 amazon-ssm-agent[2148]: 2024-12-13 01:36:20 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:36:20.263384 amazon-ssm-agent[2148]: 2024-12-13 01:36:20 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:36:20.263384 amazon-ssm-agent[2148]: 2024-12-13 01:36:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:36:20.326652 sshd[2331]: Accepted publickey for core from 139.178.68.195 port 57034 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:20.333474 amazon-ssm-agent[2148]: 2024-12-13 01:36:20 INFO [CredentialRefresher] Next credential rotation will be in 30.516660998 minutes
Dec 13 01:36:20.333277 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:20.358636 systemd-logind[2059]: New session 2 of user core.
Dec 13 01:36:20.373599 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:36:20.504003 sshd[2331]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:20.521627 systemd[1]: sshd@1-172.31.20.145:22-139.178.68.195:57034.service: Deactivated successfully.
Dec 13 01:36:20.533776 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:36:20.539072 systemd-logind[2059]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:36:20.559072 systemd[1]: Started sshd@2-172.31.20.145:22-139.178.68.195:57040.service - OpenSSH per-connection server daemon (139.178.68.195:57040).
Dec 13 01:36:20.565075 systemd-logind[2059]: Removed session 2.
Dec 13 01:36:20.761621 sshd[2345]: Accepted publickey for core from 139.178.68.195 port 57040 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:20.763998 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:20.774061 systemd-logind[2059]: New session 3 of user core.
Dec 13 01:36:20.780682 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:36:20.923611 sshd[2345]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:20.927531 systemd[1]: sshd@2-172.31.20.145:22-139.178.68.195:57040.service: Deactivated successfully.
Dec 13 01:36:20.934866 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:36:20.935075 systemd-logind[2059]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:36:20.937746 systemd-logind[2059]: Removed session 3.
Dec 13 01:36:21.039320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:21.042618 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:36:21.048213 systemd[1]: Startup finished in 11.643s (kernel) + 9.874s (userspace) = 21.518s.
Dec 13 01:36:21.190829 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:36:21.280654 amazon-ssm-agent[2148]: 2024-12-13 01:36:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:36:21.381950 amazon-ssm-agent[2148]: 2024-12-13 01:36:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2367) started
Dec 13 01:36:21.482571 amazon-ssm-agent[2148]: 2024-12-13 01:36:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:36:22.654582 kubelet[2361]: E1213 01:36:22.654495 2361 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:36:22.658253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:36:22.658873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:36:25.588345 systemd-resolved[1976]: Clock change detected. Flushing caches.
Dec 13 01:36:32.120537 systemd[1]: Started sshd@3-172.31.20.145:22-139.178.68.195:34938.service - OpenSSH per-connection server daemon (139.178.68.195:34938).
Dec 13 01:36:32.286492 sshd[2386]: Accepted publickey for core from 139.178.68.195 port 34938 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:32.288165 sshd[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:32.295681 systemd-logind[2059]: New session 4 of user core.
Dec 13 01:36:32.304783 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:36:32.446118 sshd[2386]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:32.451544 systemd[1]: sshd@3-172.31.20.145:22-139.178.68.195:34938.service: Deactivated successfully.
Dec 13 01:36:32.458962 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:36:32.460666 systemd-logind[2059]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:36:32.462480 systemd-logind[2059]: Removed session 4.
Dec 13 01:36:32.477338 systemd[1]: Started sshd@4-172.31.20.145:22-139.178.68.195:34948.service - OpenSSH per-connection server daemon (139.178.68.195:34948).
Dec 13 01:36:32.646867 sshd[2394]: Accepted publickey for core from 139.178.68.195 port 34948 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:32.648428 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:32.676170 systemd-logind[2059]: New session 5 of user core.
Dec 13 01:36:32.690062 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:36:32.817770 sshd[2394]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:32.826647 systemd[1]: sshd@4-172.31.20.145:22-139.178.68.195:34948.service: Deactivated successfully.
Dec 13 01:36:32.844276 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:36:32.844357 systemd-logind[2059]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:36:32.852997 systemd[1]: Started sshd@5-172.31.20.145:22-139.178.68.195:34950.service - OpenSSH per-connection server daemon (139.178.68.195:34950).
Dec 13 01:36:32.854429 systemd-logind[2059]: Removed session 5.
Dec 13 01:36:33.038426 sshd[2402]: Accepted publickey for core from 139.178.68.195 port 34950 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:33.041047 sshd[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:33.048581 systemd-logind[2059]: New session 6 of user core.
Dec 13 01:36:33.057794 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:36:33.188784 sshd[2402]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:33.200055 systemd[1]: sshd@5-172.31.20.145:22-139.178.68.195:34950.service: Deactivated successfully.
Dec 13 01:36:33.210711 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:36:33.215431 systemd-logind[2059]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:36:33.225785 systemd[1]: Started sshd@6-172.31.20.145:22-139.178.68.195:34952.service - OpenSSH per-connection server daemon (139.178.68.195:34952).
Dec 13 01:36:33.227060 systemd-logind[2059]: Removed session 6.
Dec 13 01:36:33.420850 sshd[2410]: Accepted publickey for core from 139.178.68.195 port 34952 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:33.422571 sshd[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:33.435745 systemd-logind[2059]: New session 7 of user core.
Dec 13 01:36:33.441628 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:36:33.576468 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:36:33.576904 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:36:34.073948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:36:34.093436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:34.135491 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:36:34.149868 (dockerd)[2433]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:36:34.560763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:34.576997 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:36:34.696496 kubelet[2444]: E1213 01:36:34.694709 2444 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:36:34.703619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:36:34.704041 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:36:35.100410 dockerd[2433]: time="2024-12-13T01:36:35.100348813Z" level=info msg="Starting up"
Dec 13 01:36:35.363178 dockerd[2433]: time="2024-12-13T01:36:35.363031341Z" level=info msg="Loading containers: start."
Dec 13 01:36:35.548155 kernel: Initializing XFRM netlink socket
Dec 13 01:36:35.583331 (udev-worker)[2473]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:36:35.655945 systemd-networkd[1648]: docker0: Link UP
Dec 13 01:36:35.677157 dockerd[2433]: time="2024-12-13T01:36:35.677041449Z" level=info msg="Loading containers: done."
Dec 13 01:36:35.703647 dockerd[2433]: time="2024-12-13T01:36:35.703542678Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:36:35.703934 dockerd[2433]: time="2024-12-13T01:36:35.703810806Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:36:35.704000 dockerd[2433]: time="2024-12-13T01:36:35.703964621Z" level=info msg="Daemon has completed initialization"
Dec 13 01:36:35.744265 dockerd[2433]: time="2024-12-13T01:36:35.744179396Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:36:35.744848 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:36:37.090009 containerd[2093]: time="2024-12-13T01:36:37.089686433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:36:37.723632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638126253.mount: Deactivated successfully.
Dec 13 01:36:40.556340 containerd[2093]: time="2024-12-13T01:36:40.556280144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:40.558202 containerd[2093]: time="2024-12-13T01:36:40.558152836Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:36:40.559145 containerd[2093]: time="2024-12-13T01:36:40.558965862Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:40.563608 containerd[2093]: time="2024-12-13T01:36:40.561931191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:40.563608 containerd[2093]: time="2024-12-13T01:36:40.563179266Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.473449748s"
Dec 13 01:36:40.563608 containerd[2093]: time="2024-12-13T01:36:40.563223310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:36:40.598471 containerd[2093]: time="2024-12-13T01:36:40.598427676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:36:43.557433 containerd[2093]: time="2024-12-13T01:36:43.557372583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:43.563675 containerd[2093]: time="2024-12-13T01:36:43.563599134Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:36:43.570215 containerd[2093]: time="2024-12-13T01:36:43.570144882Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:43.577987 containerd[2093]: time="2024-12-13T01:36:43.577914890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:43.582881 containerd[2093]: time="2024-12-13T01:36:43.582726430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.984253931s"
Dec 13 01:36:43.582881 containerd[2093]: time="2024-12-13T01:36:43.582779375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:36:43.611065 containerd[2093]: time="2024-12-13T01:36:43.610847545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:36:44.787086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:36:44.795380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:45.051387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:45.061940 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:36:45.249550 kubelet[2680]: E1213 01:36:45.249485 2680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:36:45.259362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:36:45.259597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:36:45.680740 containerd[2093]: time="2024-12-13T01:36:45.680681785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:45.682949 containerd[2093]: time="2024-12-13T01:36:45.682754503Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:36:45.686442 containerd[2093]: time="2024-12-13T01:36:45.685177722Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:45.689197 containerd[2093]: time="2024-12-13T01:36:45.689152828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:45.690418 containerd[2093]: time="2024-12-13T01:36:45.690381451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.079488445s"
Dec 13 01:36:45.690704 containerd[2093]: time="2024-12-13T01:36:45.690621518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:36:45.717731 containerd[2093]: time="2024-12-13T01:36:45.717685917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:36:47.069504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930261782.mount: Deactivated successfully.
Dec 13 01:36:47.837574 containerd[2093]: time="2024-12-13T01:36:47.837524262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:47.838831 containerd[2093]: time="2024-12-13T01:36:47.838712194Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:36:47.841093 containerd[2093]: time="2024-12-13T01:36:47.839920417Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:47.842152 containerd[2093]: time="2024-12-13T01:36:47.841996265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:47.842855 containerd[2093]: time="2024-12-13T01:36:47.842677670Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.124948294s"
Dec 13 01:36:47.842855 containerd[2093]: time="2024-12-13T01:36:47.842717967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:36:47.870109 containerd[2093]: time="2024-12-13T01:36:47.869731040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:36:48.484293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680231238.mount: Deactivated successfully.
Dec 13 01:36:49.642910 containerd[2093]: time="2024-12-13T01:36:49.642856237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:49.644312 containerd[2093]: time="2024-12-13T01:36:49.644264612Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:36:49.645147 containerd[2093]: time="2024-12-13T01:36:49.644875937Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:49.647860 containerd[2093]: time="2024-12-13T01:36:49.647805100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:49.649413 containerd[2093]: time="2024-12-13T01:36:49.649134701Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.77934112s"
Dec 13 01:36:49.649413 containerd[2093]: time="2024-12-13T01:36:49.649182289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:36:49.675983 containerd[2093]: time="2024-12-13T01:36:49.675943695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:36:50.172984 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:36:50.209153 containerd[2093]: time="2024-12-13T01:36:50.208673609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:50.210442 containerd[2093]: time="2024-12-13T01:36:50.210389218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:36:50.210547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579806356.mount: Deactivated successfully.
Dec 13 01:36:50.211481 containerd[2093]: time="2024-12-13T01:36:50.211262704Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:50.233554 containerd[2093]: time="2024-12-13T01:36:50.233510856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:50.237087 containerd[2093]: time="2024-12-13T01:36:50.236807351Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 560.811357ms"
Dec 13 01:36:50.237087 containerd[2093]: time="2024-12-13T01:36:50.236857677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:36:50.262975 containerd[2093]: time="2024-12-13T01:36:50.262937489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:36:50.845727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110997127.mount: Deactivated successfully.
Dec 13 01:36:53.772472 containerd[2093]: time="2024-12-13T01:36:53.772379274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:53.778510 containerd[2093]: time="2024-12-13T01:36:53.777425017Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:36:53.780945 containerd[2093]: time="2024-12-13T01:36:53.780002879Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:53.800574 containerd[2093]: time="2024-12-13T01:36:53.800523991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:53.802299 containerd[2093]: time="2024-12-13T01:36:53.802247333Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.539269091s"
Dec 13 01:36:53.802556 containerd[2093]: time="2024-12-13T01:36:53.802532261Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:36:55.287194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:36:55.296244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:55.577360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:55.591846 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:36:55.672014 kubelet[2883]: E1213 01:36:55.671942 2883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:36:55.676482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:36:55.676864 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:36:57.359343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:57.366542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:57.402229 systemd[1]: Reloading requested from client PID 2900 ('systemctl') (unit session-7.scope)...
Dec 13 01:36:57.402372 systemd[1]: Reloading...
Dec 13 01:36:57.539172 zram_generator::config[2943]: No configuration found.
Dec 13 01:36:57.789808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:36:57.894663 systemd[1]: Reloading finished in 491 ms.
Dec 13 01:36:57.958623 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:36:57.958827 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:36:57.959652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:57.966777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:58.266404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:58.269685 (kubelet)[3012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:36:58.355656 kubelet[3012]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:58.355656 kubelet[3012]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:36:58.355656 kubelet[3012]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:58.359164 kubelet[3012]: I1213 01:36:58.358422 3012 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:36:59.003940 kubelet[3012]: I1213 01:36:59.003897 3012 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:36:59.003940 kubelet[3012]: I1213 01:36:59.003940 3012 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:36:59.004254 kubelet[3012]: I1213 01:36:59.004233 3012 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:36:59.046304 kubelet[3012]: E1213 01:36:59.046268 3012 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.046449 kubelet[3012]: I1213 01:36:59.046398 3012 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:36:59.070759 kubelet[3012]: I1213 01:36:59.070723 3012 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:36:59.071308 kubelet[3012]: I1213 01:36:59.071283 3012 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:36:59.073370 kubelet[3012]: I1213 01:36:59.073337 3012 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:36:59.073803 kubelet[3012]: I1213 01:36:59.073379 3012 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:36:59.073803 kubelet[3012]: I1213 01:36:59.073395 3012 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:36:59.073803 kubelet[3012]: I1213 01:36:59.073722 3012 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:36:59.073944 kubelet[3012]: I1213 01:36:59.073896 3012 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:36:59.073944 kubelet[3012]: I1213 01:36:59.073918 3012 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:36:59.074019 kubelet[3012]: I1213 01:36:59.073951 3012 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:36:59.074019 kubelet[3012]: I1213 01:36:59.073973 3012 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:36:59.080223 kubelet[3012]: W1213 01:36:59.080168 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.080370 kubelet[3012]: E1213 01:36:59.080234 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.082209 kubelet[3012]: W1213 01:36:59.080582 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-145&limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.082209 kubelet[3012]: E1213 01:36:59.080615 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-145&limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.082209 kubelet[3012]: I1213 01:36:59.081546 3012 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:36:59.088044 kubelet[3012]: I1213 01:36:59.087998 3012 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:36:59.089730 kubelet[3012]: W1213 01:36:59.089688 3012 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:36:59.090446 kubelet[3012]: I1213 01:36:59.090414 3012 server.go:1256] "Started kubelet"
Dec 13 01:36:59.093423 kubelet[3012]: I1213 01:36:59.092013 3012 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:36:59.104902 kubelet[3012]: I1213 01:36:59.104433 3012 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:36:59.106106 kubelet[3012]: I1213 01:36:59.106079 3012 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:36:59.107661 kubelet[3012]: I1213 01:36:59.107637 3012 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:36:59.108509 kubelet[3012]: I1213 01:36:59.107852 3012 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:36:59.110489 kubelet[3012]: I1213 01:36:59.110462 3012 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:36:59.110900 kubelet[3012]: I1213 01:36:59.110871 3012 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:36:59.110975 kubelet[3012]: I1213 01:36:59.110944 3012 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:36:59.112057 kubelet[3012]: W1213 01:36:59.112000 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.112329 kubelet[3012]: E1213 01:36:59.112198 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.112560 kubelet[3012]: E1213 01:36:59.112435 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-145?timeout=10s\": dial tcp 172.31.20.145:6443: connect: connection refused" interval="200ms"
Dec 13 01:36:59.115570 kubelet[3012]: E1213 01:36:59.115281 3012 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.145:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.145:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-145.181098c2d44aace8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-145,UID:ip-172-31-20-145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-145,},FirstTimestamp:2024-12-13 01:36:59.090382056 +0000 UTC m=+0.810648421,LastTimestamp:2024-12-13 01:36:59.090382056 +0000 UTC m=+0.810648421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-145,}"
Dec 13 01:36:59.118156 kubelet[3012]: I1213 01:36:59.117805 3012 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:36:59.118156 kubelet[3012]: I1213 01:36:59.117922 3012 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:36:59.121995 kubelet[3012]: I1213 01:36:59.121961 3012 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:36:59.148024 kubelet[3012]: I1213 01:36:59.147992 3012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:36:59.158250 kubelet[3012]: I1213 01:36:59.158213 3012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:36:59.158250 kubelet[3012]: I1213 01:36:59.158255 3012 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:36:59.158410 kubelet[3012]: I1213 01:36:59.158276 3012 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:36:59.158410 kubelet[3012]: E1213 01:36:59.158337 3012 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:36:59.172029 kubelet[3012]: W1213 01:36:59.170908 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.172029 kubelet[3012]: E1213 01:36:59.170981 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused
Dec 13 01:36:59.173424 kubelet[3012]: I1213 01:36:59.173390 3012 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:36:59.173424 kubelet[3012]: I1213 01:36:59.173416 3012 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:36:59.173565 kubelet[3012]: I1213 01:36:59.173436 3012 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:36:59.186160 kubelet[3012]: I1213 01:36:59.185942 3012 policy_none.go:49] "None policy: Start"
Dec 13 01:36:59.187355 kubelet[3012]: I1213 01:36:59.187319 3012 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:36:59.187355 kubelet[3012]: I1213 01:36:59.187353 3012 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:36:59.193018 kubelet[3012]: I1213 01:36:59.192982 3012 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:36:59.193301 kubelet[3012]: I1213 01:36:59.193278 3012 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:36:59.199280 kubelet[3012]: E1213 01:36:59.199244 3012 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-145\" not found"
Dec 13 01:36:59.213941 kubelet[3012]: I1213 01:36:59.213883 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145"
Dec 13 01:36:59.216083 kubelet[3012]: E1213 01:36:59.216057 3012 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.145:6443/api/v1/nodes\": dial tcp 172.31.20.145:6443: connect: connection refused" node="ip-172-31-20-145"
Dec 13 01:36:59.259624 kubelet[3012]: I1213 01:36:59.259469 3012 topology_manager.go:215] "Topology Admit Handler" podUID="f2576a9d4fa7fa46d3459f153f132380" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-145"
Dec 13 01:36:59.262633 kubelet[3012]: I1213 01:36:59.262427 3012 topology_manager.go:215] "Topology Admit Handler" podUID="9c0c720114e938b36ee4f1f00bfe4f0d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.264377 kubelet[3012]: I1213 01:36:59.264345 3012 topology_manager.go:215] "Topology Admit Handler" podUID="91545541379665a00da5b2ece6520422" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-145"
Dec 13 01:36:59.314987 kubelet[3012]: E1213 01:36:59.314821 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-145?timeout=10s\": dial tcp 172.31.20.145:6443: connect: connection refused" interval="400ms"
Dec 13 01:36:59.412524 kubelet[3012]: I1213 01:36:59.412459 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.413234 kubelet[3012]: I1213 01:36:59.412544 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.413234 kubelet[3012]: I1213 01:36:59.412618 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.413234 kubelet[3012]: I1213 01:36:59.412669 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91545541379665a00da5b2ece6520422-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-145\" (UID: \"91545541379665a00da5b2ece6520422\") " pod="kube-system/kube-scheduler-ip-172-31-20-145"
Dec 13 01:36:59.413234 kubelet[3012]: I1213 01:36:59.412715 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145"
Dec 13 01:36:59.413234 kubelet[3012]: I1213 01:36:59.412748 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145"
Dec 13 01:36:59.413436 kubelet[3012]: I1213 01:36:59.412793 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.413436 kubelet[3012]: I1213 01:36:59.412822 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145"
Dec 13 01:36:59.413436 kubelet[3012]: I1213 01:36:59.412855 3012 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-ca-certs\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145"
Dec 13 01:36:59.419370 kubelet[3012]: I1213 01:36:59.418966 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145"
Dec 13 01:36:59.419370 kubelet[3012]: E1213 01:36:59.419348 3012 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.145:6443/api/v1/nodes\": dial tcp 172.31.20.145:6443: connect: connection refused" node="ip-172-31-20-145"
Dec 13 01:36:59.571764 containerd[2093]: time="2024-12-13T01:36:59.571706790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-145,Uid:f2576a9d4fa7fa46d3459f153f132380,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:59.583555 containerd[2093]: time="2024-12-13T01:36:59.583507639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-145,Uid:9c0c720114e938b36ee4f1f00bfe4f0d,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:59.588108 containerd[2093]: time="2024-12-13T01:36:59.588032792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-145,Uid:91545541379665a00da5b2ece6520422,Namespace:kube-system,Attempt:0,}"
Dec 13 01:36:59.716298 kubelet[3012]: E1213 01:36:59.716189 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-145?timeout=10s\": dial tcp 172.31.20.145:6443: connect: connection refused" interval="800ms"
Dec 13 01:36:59.821930 kubelet[3012]: I1213 01:36:59.821792 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145"
Dec 13 01:36:59.822231 kubelet[3012]: E1213 01:36:59.822204 3012 kubelet_node_status.go:96] "Unable to register
node with API server" err="Post \"https://172.31.20.145:6443/api/v1/nodes\": dial tcp 172.31.20.145:6443: connect: connection refused" node="ip-172-31-20-145" Dec 13 01:37:00.037628 kubelet[3012]: W1213 01:37:00.037553 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.037628 kubelet[3012]: E1213 01:37:00.037633 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.135247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4148845904.mount: Deactivated successfully. Dec 13 01:37:00.143998 kubelet[3012]: W1213 01:37:00.143913 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.143998 kubelet[3012]: E1213 01:37:00.143996 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.192913 containerd[2093]: time="2024-12-13T01:37:00.189559534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:37:00.198770 containerd[2093]: time="2024-12-13T01:37:00.198698238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Dec 13 01:37:00.202991 containerd[2093]: time="2024-12-13T01:37:00.202630602Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:37:00.208107 containerd[2093]: time="2024-12-13T01:37:00.206491969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:37:00.216066 containerd[2093]: time="2024-12-13T01:37:00.216008352Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:37:00.224345 containerd[2093]: time="2024-12-13T01:37:00.224277627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:37:00.228005 containerd[2093]: time="2024-12-13T01:37:00.227887239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:37:00.234431 containerd[2093]: time="2024-12-13T01:37:00.234270526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:37:00.235763 containerd[2093]: time="2024-12-13T01:37:00.235712561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 663.919739ms" Dec 13 01:37:00.237699 
containerd[2093]: time="2024-12-13T01:37:00.237534753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.936828ms" Dec 13 01:37:00.245774 containerd[2093]: time="2024-12-13T01:37:00.245711480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.518846ms" Dec 13 01:37:00.261707 kubelet[3012]: W1213 01:37:00.261611 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.261707 kubelet[3012]: E1213 01:37:00.261664 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.518854 kubelet[3012]: E1213 01:37:00.518597 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-145?timeout=10s\": dial tcp 172.31.20.145:6443: connect: connection refused" interval="1.6s" Dec 13 01:37:00.573323 containerd[2093]: time="2024-12-13T01:37:00.572967479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:00.573323 containerd[2093]: time="2024-12-13T01:37:00.573045154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:00.573323 containerd[2093]: time="2024-12-13T01:37:00.573114798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.574870 containerd[2093]: time="2024-12-13T01:37:00.574305297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.574988 containerd[2093]: time="2024-12-13T01:37:00.574749141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:00.574988 containerd[2093]: time="2024-12-13T01:37:00.574908608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:00.575410 containerd[2093]: time="2024-12-13T01:37:00.574998369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.575657 containerd[2093]: time="2024-12-13T01:37:00.575393402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.576932 containerd[2093]: time="2024-12-13T01:37:00.576692825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:00.576932 containerd[2093]: time="2024-12-13T01:37:00.576745830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:00.576932 containerd[2093]: time="2024-12-13T01:37:00.576757715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.576932 containerd[2093]: time="2024-12-13T01:37:00.576841756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:00.623505 kubelet[3012]: W1213 01:37:00.623435 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-145&limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.623505 kubelet[3012]: E1213 01:37:00.623514 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-145&limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:00.631540 kubelet[3012]: I1213 01:37:00.628394 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145" Dec 13 01:37:00.631540 kubelet[3012]: E1213 01:37:00.628780 3012 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.145:6443/api/v1/nodes\": dial tcp 172.31.20.145:6443: connect: connection refused" node="ip-172-31-20-145" Dec 13 01:37:00.744438 containerd[2093]: time="2024-12-13T01:37:00.744231746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-145,Uid:f2576a9d4fa7fa46d3459f153f132380,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4585ade5cfd274548454a41eee3c5e6080d37463587a46b7223751108c97e67\"" Dec 13 01:37:00.770209 containerd[2093]: time="2024-12-13T01:37:00.770064088Z" 
level=info msg="CreateContainer within sandbox \"b4585ade5cfd274548454a41eee3c5e6080d37463587a46b7223751108c97e67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:37:00.809317 containerd[2093]: time="2024-12-13T01:37:00.798648163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-145,Uid:9c0c720114e938b36ee4f1f00bfe4f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"47635c9f743579cd5b50b85a1dec757f12685dbf28314eee32e283e219749e85\"" Dec 13 01:37:00.820610 containerd[2093]: time="2024-12-13T01:37:00.820555960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-145,Uid:91545541379665a00da5b2ece6520422,Namespace:kube-system,Attempt:0,} returns sandbox id \"f464137bbccc1eeb3f3c0e8ca9f14e03fe8f1b173477e026b0440ae137802b92\"" Dec 13 01:37:00.826074 containerd[2093]: time="2024-12-13T01:37:00.825932525Z" level=info msg="CreateContainer within sandbox \"47635c9f743579cd5b50b85a1dec757f12685dbf28314eee32e283e219749e85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:37:00.828645 containerd[2093]: time="2024-12-13T01:37:00.828608818Z" level=info msg="CreateContainer within sandbox \"f464137bbccc1eeb3f3c0e8ca9f14e03fe8f1b173477e026b0440ae137802b92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:37:00.829519 containerd[2093]: time="2024-12-13T01:37:00.829159768Z" level=info msg="CreateContainer within sandbox \"b4585ade5cfd274548454a41eee3c5e6080d37463587a46b7223751108c97e67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93f46a3691e62b6d30a6dc1139e27c26b16a220d635048e46aa3fd806a52b621\"" Dec 13 01:37:00.830585 containerd[2093]: time="2024-12-13T01:37:00.830544600Z" level=info msg="StartContainer for \"93f46a3691e62b6d30a6dc1139e27c26b16a220d635048e46aa3fd806a52b621\"" Dec 13 01:37:00.852672 containerd[2093]: time="2024-12-13T01:37:00.852554289Z" level=info 
msg="CreateContainer within sandbox \"47635c9f743579cd5b50b85a1dec757f12685dbf28314eee32e283e219749e85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d\"" Dec 13 01:37:00.853756 containerd[2093]: time="2024-12-13T01:37:00.853642803Z" level=info msg="StartContainer for \"9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d\"" Dec 13 01:37:00.856811 containerd[2093]: time="2024-12-13T01:37:00.856688673Z" level=info msg="CreateContainer within sandbox \"f464137bbccc1eeb3f3c0e8ca9f14e03fe8f1b173477e026b0440ae137802b92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700\"" Dec 13 01:37:00.857320 containerd[2093]: time="2024-12-13T01:37:00.857295877Z" level=info msg="StartContainer for \"feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700\"" Dec 13 01:37:01.066300 containerd[2093]: time="2024-12-13T01:37:01.066238519Z" level=info msg="StartContainer for \"93f46a3691e62b6d30a6dc1139e27c26b16a220d635048e46aa3fd806a52b621\" returns successfully" Dec 13 01:37:01.139814 containerd[2093]: time="2024-12-13T01:37:01.136828945Z" level=info msg="StartContainer for \"feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700\" returns successfully" Dec 13 01:37:01.164566 kubelet[3012]: E1213 01:37:01.164531 3012 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:01.170539 containerd[2093]: time="2024-12-13T01:37:01.166058320Z" level=info msg="StartContainer for \"9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d\" returns successfully" Dec 13 
01:37:02.033071 kubelet[3012]: W1213 01:37:02.032670 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:02.033071 kubelet[3012]: E1213 01:37:02.032745 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:02.123914 kubelet[3012]: E1213 01:37:02.123202 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-145?timeout=10s\": dial tcp 172.31.20.145:6443: connect: connection refused" interval="3.2s" Dec 13 01:37:02.242100 kubelet[3012]: W1213 01:37:02.242013 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:02.242100 kubelet[3012]: E1213 01:37:02.242066 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:02.251349 kubelet[3012]: I1213 01:37:02.251316 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145" Dec 13 01:37:02.262528 kubelet[3012]: E1213 01:37:02.262488 3012 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.145:6443/api/v1/nodes\": dial tcp 172.31.20.145:6443: connect: 
connection refused" node="ip-172-31-20-145" Dec 13 01:37:02.391522 kubelet[3012]: E1213 01:37:02.391479 3012 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.145:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.145:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-145.181098c2d44aace8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-145,UID:ip-172-31-20-145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-145,},FirstTimestamp:2024-12-13 01:36:59.090382056 +0000 UTC m=+0.810648421,LastTimestamp:2024-12-13 01:36:59.090382056 +0000 UTC m=+0.810648421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-145,}" Dec 13 01:37:02.507583 kubelet[3012]: W1213 01:37:02.507534 3012 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:02.507583 kubelet[3012]: E1213 01:37:02.507591 3012 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.145:6443: connect: connection refused Dec 13 01:37:03.710216 update_engine[2062]: I20241213 01:37:03.709374 2062 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:37:03.985670 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3299) Dec 13 01:37:04.509330 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3298) Dec 13 01:37:05.136150 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3298) Dec 13 01:37:05.532234 kubelet[3012]: I1213 01:37:05.522391 3012 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145" Dec 13 01:37:05.645227 kubelet[3012]: E1213 01:37:05.644570 3012 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-145\" not found" node="ip-172-31-20-145" Dec 13 01:37:05.705144 kubelet[3012]: I1213 01:37:05.705095 3012 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-145" Dec 13 01:37:06.085607 kubelet[3012]: I1213 01:37:06.085562 3012 apiserver.go:52] "Watching apiserver" Dec 13 01:37:06.111553 kubelet[3012]: I1213 01:37:06.111472 3012 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:37:09.009257 systemd[1]: Reloading requested from client PID 3554 ('systemctl') (unit session-7.scope)... Dec 13 01:37:09.009279 systemd[1]: Reloading... Dec 13 01:37:09.253166 zram_generator::config[3595]: No configuration found. Dec 13 01:37:09.467656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:37:09.606285 systemd[1]: Reloading finished in 596 ms. Dec 13 01:37:09.648281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:37:09.667745 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:37:09.668638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:37:09.677641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:37:10.023407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:37:10.027389 (kubelet)[3661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:37:10.141261 kubelet[3661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:37:10.141261 kubelet[3661]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:37:10.141261 kubelet[3661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:37:10.143740 kubelet[3661]: I1213 01:37:10.143183 3661 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:37:10.152579 kubelet[3661]: I1213 01:37:10.151699 3661 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:37:10.152579 kubelet[3661]: I1213 01:37:10.151738 3661 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:37:10.152579 kubelet[3661]: I1213 01:37:10.152105 3661 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:37:10.154298 kubelet[3661]: I1213 01:37:10.154269 3661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 01:37:10.164163 kubelet[3661]: I1213 01:37:10.163965 3661 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:37:10.176630 kubelet[3661]: I1213 01:37:10.176201 3661 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:37:10.177445 kubelet[3661]: I1213 01:37:10.177041 3661 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:37:10.177571 kubelet[3661]: I1213 01:37:10.177546 3661 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178411 3661 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178442 3661 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178501 3661 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178675 3661 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178693 3661 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178728 3661 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:37:10.181212 kubelet[3661]: I1213 01:37:10.178752 3661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:37:10.186080 kubelet[3661]: I1213 01:37:10.182877 3661 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:37:10.186080 kubelet[3661]: I1213 01:37:10.183106 3661 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:37:10.189990 kubelet[3661]: I1213 01:37:10.189950 3661 server.go:1256] "Started kubelet" Dec 13 01:37:10.204567 kubelet[3661]: I1213 01:37:10.204534 3661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:37:10.217574 kubelet[3661]: I1213 01:37:10.217478 3661 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:37:10.222819 kubelet[3661]: I1213 01:37:10.221199 3661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:37:10.222819 kubelet[3661]: I1213 01:37:10.221918 3661 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:37:10.228100 kubelet[3661]: I1213 01:37:10.228072 3661 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Dec 13 01:37:10.247949 kubelet[3661]: I1213 01:37:10.228408 3661 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:37:10.249380 kubelet[3661]: I1213 01:37:10.249358 3661 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:37:10.249753 kubelet[3661]: I1213 01:37:10.229758 3661 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:37:10.251703 kubelet[3661]: I1213 01:37:10.250756 3661 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:37:10.262435 kubelet[3661]: I1213 01:37:10.262384 3661 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:37:10.279967 kubelet[3661]: I1213 01:37:10.278850 3661 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:37:10.292719 kubelet[3661]: I1213 01:37:10.292690 3661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:37:10.295839 kubelet[3661]: E1213 01:37:10.295707 3661 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:37:10.301771 kubelet[3661]: I1213 01:37:10.301613 3661 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:37:10.301771 kubelet[3661]: I1213 01:37:10.301654 3661 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:37:10.301771 kubelet[3661]: I1213 01:37:10.301677 3661 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:37:10.301771 kubelet[3661]: E1213 01:37:10.301747 3661 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:37:10.339791 kubelet[3661]: I1213 01:37:10.339763 3661 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-145" Dec 13 01:37:10.358644 kubelet[3661]: I1213 01:37:10.358583 3661 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-145" Dec 13 01:37:10.359003 kubelet[3661]: I1213 01:37:10.358719 3661 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-145" Dec 13 01:37:10.402090 kubelet[3661]: E1213 01:37:10.401947 3661 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403502 3661 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403557 3661 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403594 3661 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403769 3661 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403795 3661 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:37:10.403851 kubelet[3661]: I1213 01:37:10.403805 3661 policy_none.go:49] "None policy: Start" Dec 13 01:37:10.405968 kubelet[3661]: I1213 01:37:10.405351 3661 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:37:10.405968 kubelet[3661]: I1213 
01:37:10.405391 3661 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:37:10.405968 kubelet[3661]: I1213 01:37:10.405594 3661 state_mem.go:75] "Updated machine memory state" Dec 13 01:37:10.408166 kubelet[3661]: I1213 01:37:10.408113 3661 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:37:10.410250 kubelet[3661]: I1213 01:37:10.409932 3661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:37:10.602853 kubelet[3661]: I1213 01:37:10.602554 3661 topology_manager.go:215] "Topology Admit Handler" podUID="f2576a9d4fa7fa46d3459f153f132380" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-145" Dec 13 01:37:10.603185 kubelet[3661]: I1213 01:37:10.603074 3661 topology_manager.go:215] "Topology Admit Handler" podUID="9c0c720114e938b36ee4f1f00bfe4f0d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.607193 kubelet[3661]: I1213 01:37:10.604382 3661 topology_manager.go:215] "Topology Admit Handler" podUID="91545541379665a00da5b2ece6520422" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-145" Dec 13 01:37:10.617276 kubelet[3661]: E1213 01:37:10.617144 3661 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-145\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-145" Dec 13 01:37:10.652473 kubelet[3661]: I1213 01:37:10.652422 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145" Dec 13 01:37:10.652956 kubelet[3661]: I1213 01:37:10.652490 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145" Dec 13 01:37:10.652956 kubelet[3661]: I1213 01:37:10.652520 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.652956 kubelet[3661]: I1213 01:37:10.652550 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.652956 kubelet[3661]: I1213 01:37:10.652588 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.652956 kubelet[3661]: I1213 01:37:10.652620 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.653317 kubelet[3661]: 
I1213 01:37:10.652647 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2576a9d4fa7fa46d3459f153f132380-ca-certs\") pod \"kube-apiserver-ip-172-31-20-145\" (UID: \"f2576a9d4fa7fa46d3459f153f132380\") " pod="kube-system/kube-apiserver-ip-172-31-20-145" Dec 13 01:37:10.653317 kubelet[3661]: I1213 01:37:10.652841 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c0c720114e938b36ee4f1f00bfe4f0d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-145\" (UID: \"9c0c720114e938b36ee4f1f00bfe4f0d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:10.653317 kubelet[3661]: I1213 01:37:10.652883 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91545541379665a00da5b2ece6520422-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-145\" (UID: \"91545541379665a00da5b2ece6520422\") " pod="kube-system/kube-scheduler-ip-172-31-20-145" Dec 13 01:37:11.182595 kubelet[3661]: I1213 01:37:11.182309 3661 apiserver.go:52] "Watching apiserver" Dec 13 01:37:11.249233 kubelet[3661]: I1213 01:37:11.249181 3661 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:37:11.369286 kubelet[3661]: E1213 01:37:11.368513 3661 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-145\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-145" Dec 13 01:37:11.380066 kubelet[3661]: E1213 01:37:11.380034 3661 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-20-145\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-145" Dec 13 01:37:11.420949 kubelet[3661]: I1213 01:37:11.420864 3661 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-145" podStartSLOduration=1.420807007 podStartE2EDuration="1.420807007s" podCreationTimestamp="2024-12-13 01:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:11.393449612 +0000 UTC m=+1.355261907" watchObservedRunningTime="2024-12-13 01:37:11.420807007 +0000 UTC m=+1.382619310" Dec 13 01:37:11.443211 kubelet[3661]: I1213 01:37:11.443074 3661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-145" podStartSLOduration=1.443028221 podStartE2EDuration="1.443028221s" podCreationTimestamp="2024-12-13 01:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:11.442698634 +0000 UTC m=+1.404510928" watchObservedRunningTime="2024-12-13 01:37:11.443028221 +0000 UTC m=+1.404840517" Dec 13 01:37:11.804167 sudo[2414]: pam_unix(sudo:session): session closed for user root Dec 13 01:37:11.829576 sshd[2410]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:11.834727 systemd[1]: sshd@6-172.31.20.145:22-139.178.68.195:34952.service: Deactivated successfully. Dec 13 01:37:11.845372 systemd-logind[2059]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:37:11.845613 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:37:11.848729 systemd-logind[2059]: Removed session 7. Dec 13 01:37:21.814542 kubelet[3661]: I1213 01:37:21.814504 3661 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:37:21.816935 containerd[2093]: time="2024-12-13T01:37:21.815745247Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:37:21.817426 kubelet[3661]: I1213 01:37:21.816260 3661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:37:22.623319 kubelet[3661]: I1213 01:37:22.622850 3661 topology_manager.go:215] "Topology Admit Handler" podUID="d9fc0335-9931-4725-8c0d-b6c01b21c4e3" podNamespace="kube-system" podName="kube-proxy-9b2qb" Dec 13 01:37:22.635146 kubelet[3661]: I1213 01:37:22.631686 3661 topology_manager.go:215] "Topology Admit Handler" podUID="fc028e8f-5399-4577-b067-f361d786f542" podNamespace="kube-flannel" podName="kube-flannel-ds-6tlch" Dec 13 01:37:22.738155 kubelet[3661]: I1213 01:37:22.737741 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gxz\" (UniqueName: \"kubernetes.io/projected/d9fc0335-9931-4725-8c0d-b6c01b21c4e3-kube-api-access-r5gxz\") pod \"kube-proxy-9b2qb\" (UID: \"d9fc0335-9931-4725-8c0d-b6c01b21c4e3\") " pod="kube-system/kube-proxy-9b2qb" Dec 13 01:37:22.738155 kubelet[3661]: I1213 01:37:22.737811 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9fc0335-9931-4725-8c0d-b6c01b21c4e3-lib-modules\") pod \"kube-proxy-9b2qb\" (UID: \"d9fc0335-9931-4725-8c0d-b6c01b21c4e3\") " pod="kube-system/kube-proxy-9b2qb" Dec 13 01:37:22.738155 kubelet[3661]: I1213 01:37:22.737841 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fc028e8f-5399-4577-b067-f361d786f542-cni\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738155 kubelet[3661]: I1213 01:37:22.737875 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/fc028e8f-5399-4577-b067-f361d786f542-flannel-cfg\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738155 kubelet[3661]: I1213 01:37:22.737905 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9fc0335-9931-4725-8c0d-b6c01b21c4e3-xtables-lock\") pod \"kube-proxy-9b2qb\" (UID: \"d9fc0335-9931-4725-8c0d-b6c01b21c4e3\") " pod="kube-system/kube-proxy-9b2qb" Dec 13 01:37:22.738395 kubelet[3661]: I1213 01:37:22.737936 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc028e8f-5399-4577-b067-f361d786f542-run\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738395 kubelet[3661]: I1213 01:37:22.737957 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fc028e8f-5399-4577-b067-f361d786f542-cni-plugin\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738395 kubelet[3661]: I1213 01:37:22.737975 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc028e8f-5399-4577-b067-f361d786f542-xtables-lock\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738395 kubelet[3661]: I1213 01:37:22.738078 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvpd\" (UniqueName: 
\"kubernetes.io/projected/fc028e8f-5399-4577-b067-f361d786f542-kube-api-access-qpvpd\") pod \"kube-flannel-ds-6tlch\" (UID: \"fc028e8f-5399-4577-b067-f361d786f542\") " pod="kube-flannel/kube-flannel-ds-6tlch" Dec 13 01:37:22.738395 kubelet[3661]: I1213 01:37:22.738103 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9fc0335-9931-4725-8c0d-b6c01b21c4e3-kube-proxy\") pod \"kube-proxy-9b2qb\" (UID: \"d9fc0335-9931-4725-8c0d-b6c01b21c4e3\") " pod="kube-system/kube-proxy-9b2qb" Dec 13 01:37:22.952006 containerd[2093]: time="2024-12-13T01:37:22.951364441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b2qb,Uid:d9fc0335-9931-4725-8c0d-b6c01b21c4e3,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:22.954225 containerd[2093]: time="2024-12-13T01:37:22.953873511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6tlch,Uid:fc028e8f-5399-4577-b067-f361d786f542,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:37:23.062009 containerd[2093]: time="2024-12-13T01:37:23.061858250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:23.062009 containerd[2093]: time="2024-12-13T01:37:23.061944043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:23.062009 containerd[2093]: time="2024-12-13T01:37:23.061969005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:23.062916 containerd[2093]: time="2024-12-13T01:37:23.062092843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:23.070870 containerd[2093]: time="2024-12-13T01:37:23.067300710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:23.070870 containerd[2093]: time="2024-12-13T01:37:23.067524598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:23.070870 containerd[2093]: time="2024-12-13T01:37:23.067592840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:23.070870 containerd[2093]: time="2024-12-13T01:37:23.067771195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:23.206561 containerd[2093]: time="2024-12-13T01:37:23.205097628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b2qb,Uid:d9fc0335-9931-4725-8c0d-b6c01b21c4e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5899d1c6bc062478b9ecc4bfcc8bd193a6e48cd722fdf5d906c7d0a3a6c23001\"" Dec 13 01:37:23.216239 containerd[2093]: time="2024-12-13T01:37:23.215668469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6tlch,Uid:fc028e8f-5399-4577-b067-f361d786f542,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\"" Dec 13 01:37:23.216239 containerd[2093]: time="2024-12-13T01:37:23.215882680Z" level=info msg="CreateContainer within sandbox \"5899d1c6bc062478b9ecc4bfcc8bd193a6e48cd722fdf5d906c7d0a3a6c23001\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:37:23.221664 containerd[2093]: time="2024-12-13T01:37:23.220106655Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:37:23.257534 containerd[2093]: 
time="2024-12-13T01:37:23.256691230Z" level=info msg="CreateContainer within sandbox \"5899d1c6bc062478b9ecc4bfcc8bd193a6e48cd722fdf5d906c7d0a3a6c23001\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f017880e7f3cf71055c2bba7461c3bb8e1c8012a6c5e96ced0207933512a917f\"" Dec 13 01:37:23.260013 containerd[2093]: time="2024-12-13T01:37:23.259940272Z" level=info msg="StartContainer for \"f017880e7f3cf71055c2bba7461c3bb8e1c8012a6c5e96ced0207933512a917f\"" Dec 13 01:37:23.344048 containerd[2093]: time="2024-12-13T01:37:23.343998354Z" level=info msg="StartContainer for \"f017880e7f3cf71055c2bba7461c3bb8e1c8012a6c5e96ced0207933512a917f\" returns successfully" Dec 13 01:37:23.424565 kubelet[3661]: I1213 01:37:23.424521 3661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9b2qb" podStartSLOduration=1.424469628 podStartE2EDuration="1.424469628s" podCreationTimestamp="2024-12-13 01:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:23.421516263 +0000 UTC m=+13.383328558" watchObservedRunningTime="2024-12-13 01:37:23.424469628 +0000 UTC m=+13.386281923" Dec 13 01:37:25.090979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680979505.mount: Deactivated successfully. 
Dec 13 01:37:25.175872 containerd[2093]: time="2024-12-13T01:37:25.175820097Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:25.178919 containerd[2093]: time="2024-12-13T01:37:25.178332006Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Dec 13 01:37:25.183526 containerd[2093]: time="2024-12-13T01:37:25.181509796Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:25.186822 containerd[2093]: time="2024-12-13T01:37:25.186652171Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:25.187706 containerd[2093]: time="2024-12-13T01:37:25.187663437Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.967015714s" Dec 13 01:37:25.187804 containerd[2093]: time="2024-12-13T01:37:25.187714815Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:37:25.191291 containerd[2093]: time="2024-12-13T01:37:25.191254723Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:37:25.220591 containerd[2093]: 
time="2024-12-13T01:37:25.220465891Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a\"" Dec 13 01:37:25.221314 containerd[2093]: time="2024-12-13T01:37:25.221205248Z" level=info msg="StartContainer for \"0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a\"" Dec 13 01:37:25.308029 containerd[2093]: time="2024-12-13T01:37:25.307977642Z" level=info msg="StartContainer for \"0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a\" returns successfully" Dec 13 01:37:25.382980 containerd[2093]: time="2024-12-13T01:37:25.382717041Z" level=info msg="shim disconnected" id=0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a namespace=k8s.io Dec 13 01:37:25.382980 containerd[2093]: time="2024-12-13T01:37:25.382888277Z" level=warning msg="cleaning up after shim disconnected" id=0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a namespace=k8s.io Dec 13 01:37:25.382980 containerd[2093]: time="2024-12-13T01:37:25.382902700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:25.410790 containerd[2093]: time="2024-12-13T01:37:25.410409063Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:37:25.928871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbd1ae1f0ac4904e1b27aa1bccbb313a245bb86742332bdf1bb0b625b77119a-rootfs.mount: Deactivated successfully. Dec 13 01:37:27.432068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142863212.mount: Deactivated successfully. 
Dec 13 01:37:28.414949 containerd[2093]: time="2024-12-13T01:37:28.414894389Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:28.418172 containerd[2093]: time="2024-12-13T01:37:28.417435863Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:37:28.422545 containerd[2093]: time="2024-12-13T01:37:28.422402379Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:28.432047 containerd[2093]: time="2024-12-13T01:37:28.431970561Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:37:28.434870 containerd[2093]: time="2024-12-13T01:37:28.434819758Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.024362807s" Dec 13 01:37:28.434870 containerd[2093]: time="2024-12-13T01:37:28.434873518Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:37:28.447493 containerd[2093]: time="2024-12-13T01:37:28.442990478Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:37:28.506311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445995859.mount: Deactivated 
successfully. Dec 13 01:37:28.528496 containerd[2093]: time="2024-12-13T01:37:28.528449408Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b\"" Dec 13 01:37:28.530919 containerd[2093]: time="2024-12-13T01:37:28.529352988Z" level=info msg="StartContainer for \"dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b\"" Dec 13 01:37:28.696637 containerd[2093]: time="2024-12-13T01:37:28.696244558Z" level=info msg="StartContainer for \"dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b\" returns successfully" Dec 13 01:37:28.773373 kubelet[3661]: I1213 01:37:28.773303 3661 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:37:28.795960 containerd[2093]: time="2024-12-13T01:37:28.795885777Z" level=info msg="shim disconnected" id=dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b namespace=k8s.io Dec 13 01:37:28.795960 containerd[2093]: time="2024-12-13T01:37:28.795954177Z" level=warning msg="cleaning up after shim disconnected" id=dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b namespace=k8s.io Dec 13 01:37:28.795960 containerd[2093]: time="2024-12-13T01:37:28.795966557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:28.820175 kubelet[3661]: I1213 01:37:28.820115 3661 topology_manager.go:215] "Topology Admit Handler" podUID="72751374-c0c7-46fe-867c-63fdf7b86cc0" podNamespace="kube-system" podName="coredns-76f75df574-gtxw6" Dec 13 01:37:28.823255 kubelet[3661]: I1213 01:37:28.820376 3661 topology_manager.go:215] "Topology Admit Handler" podUID="99f726eb-5c0a-478c-b656-ac2db91c49a7" podNamespace="kube-system" podName="coredns-76f75df574-hd7z4" Dec 13 01:37:28.885854 kubelet[3661]: I1213 01:37:28.885698 3661 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpgp\" (UniqueName: \"kubernetes.io/projected/99f726eb-5c0a-478c-b656-ac2db91c49a7-kube-api-access-7dpgp\") pod \"coredns-76f75df574-hd7z4\" (UID: \"99f726eb-5c0a-478c-b656-ac2db91c49a7\") " pod="kube-system/coredns-76f75df574-hd7z4" Dec 13 01:37:28.885854 kubelet[3661]: I1213 01:37:28.885764 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg79r\" (UniqueName: \"kubernetes.io/projected/72751374-c0c7-46fe-867c-63fdf7b86cc0-kube-api-access-bg79r\") pod \"coredns-76f75df574-gtxw6\" (UID: \"72751374-c0c7-46fe-867c-63fdf7b86cc0\") " pod="kube-system/coredns-76f75df574-gtxw6" Dec 13 01:37:28.885854 kubelet[3661]: I1213 01:37:28.885789 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72751374-c0c7-46fe-867c-63fdf7b86cc0-config-volume\") pod \"coredns-76f75df574-gtxw6\" (UID: \"72751374-c0c7-46fe-867c-63fdf7b86cc0\") " pod="kube-system/coredns-76f75df574-gtxw6" Dec 13 01:37:28.885854 kubelet[3661]: I1213 01:37:28.885820 3661 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99f726eb-5c0a-478c-b656-ac2db91c49a7-config-volume\") pod \"coredns-76f75df574-hd7z4\" (UID: \"99f726eb-5c0a-478c-b656-ac2db91c49a7\") " pod="kube-system/coredns-76f75df574-hd7z4" Dec 13 01:37:29.131475 containerd[2093]: time="2024-12-13T01:37:29.131399007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gtxw6,Uid:72751374-c0c7-46fe-867c-63fdf7b86cc0,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:29.135792 containerd[2093]: time="2024-12-13T01:37:29.135746226Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-hd7z4,Uid:99f726eb-5c0a-478c-b656-ac2db91c49a7,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:29.257316 containerd[2093]: time="2024-12-13T01:37:29.257188027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gtxw6,Uid:72751374-c0c7-46fe-867c-63fdf7b86cc0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7dddef693454db56e89e2dce13ab7c89e6831f133fce930ef774553f319417e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:29.257943 kubelet[3661]: E1213 01:37:29.257917 3661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dddef693454db56e89e2dce13ab7c89e6831f133fce930ef774553f319417e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:29.258163 kubelet[3661]: E1213 01:37:29.257989 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dddef693454db56e89e2dce13ab7c89e6831f133fce930ef774553f319417e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-gtxw6" Dec 13 01:37:29.258163 kubelet[3661]: E1213 01:37:29.258016 3661 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dddef693454db56e89e2dce13ab7c89e6831f133fce930ef774553f319417e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-gtxw6" Dec 13 01:37:29.258163 kubelet[3661]: E1213 01:37:29.258086 3661 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gtxw6_kube-system(72751374-c0c7-46fe-867c-63fdf7b86cc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gtxw6_kube-system(72751374-c0c7-46fe-867c-63fdf7b86cc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7dddef693454db56e89e2dce13ab7c89e6831f133fce930ef774553f319417e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-gtxw6" podUID="72751374-c0c7-46fe-867c-63fdf7b86cc0" Dec 13 01:37:29.273801 containerd[2093]: time="2024-12-13T01:37:29.273745804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hd7z4,Uid:99f726eb-5c0a-478c-b656-ac2db91c49a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edb4b5ba422551fcc9258033e13a4a2bad41dde35e88a3cb48adbe033b61e7cb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:29.274093 kubelet[3661]: E1213 01:37:29.274061 3661 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb4b5ba422551fcc9258033e13a4a2bad41dde35e88a3cb48adbe033b61e7cb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:37:29.274562 kubelet[3661]: E1213 01:37:29.274206 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb4b5ba422551fcc9258033e13a4a2bad41dde35e88a3cb48adbe033b61e7cb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-hd7z4" Dec 13 01:37:29.274562 
kubelet[3661]: E1213 01:37:29.274239 3661 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb4b5ba422551fcc9258033e13a4a2bad41dde35e88a3cb48adbe033b61e7cb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-hd7z4" Dec 13 01:37:29.274562 kubelet[3661]: E1213 01:37:29.274303 3661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hd7z4_kube-system(99f726eb-5c0a-478c-b656-ac2db91c49a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hd7z4_kube-system(99f726eb-5c0a-478c-b656-ac2db91c49a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edb4b5ba422551fcc9258033e13a4a2bad41dde35e88a3cb48adbe033b61e7cb\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-hd7z4" podUID="99f726eb-5c0a-478c-b656-ac2db91c49a7" Dec 13 01:37:29.468152 containerd[2093]: time="2024-12-13T01:37:29.467796917Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:37:29.500996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc62d94bbbbaf15199b99d4081f03123916e9a1a4b3f69fa7a57db83a20ded2b-rootfs.mount: Deactivated successfully. Dec 13 01:37:29.536497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279700208.mount: Deactivated successfully. 
Dec 13 01:37:29.541665 containerd[2093]: time="2024-12-13T01:37:29.541614181Z" level=info msg="CreateContainer within sandbox \"e4d6b038d10d7091ebac64de8bc6fd33dc83bad96ad547314af4f545b4aa76e6\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"758067ee1eac958ba04c2729505f8aa97f1584c88e1027aabf4f82765ad2996f\""
Dec 13 01:37:29.542414 containerd[2093]: time="2024-12-13T01:37:29.542371462Z" level=info msg="StartContainer for \"758067ee1eac958ba04c2729505f8aa97f1584c88e1027aabf4f82765ad2996f\""
Dec 13 01:37:29.612313 containerd[2093]: time="2024-12-13T01:37:29.612264952Z" level=info msg="StartContainer for \"758067ee1eac958ba04c2729505f8aa97f1584c88e1027aabf4f82765ad2996f\" returns successfully"
Dec 13 01:37:30.538142 kubelet[3661]: I1213 01:37:30.536978 3661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6tlch" podStartSLOduration=3.319288461 podStartE2EDuration="8.536820803s" podCreationTimestamp="2024-12-13 01:37:22 +0000 UTC" firstStartedPulling="2024-12-13 01:37:23.218402527 +0000 UTC m=+13.180214801" lastFinishedPulling="2024-12-13 01:37:28.435934854 +0000 UTC m=+18.397747143" observedRunningTime="2024-12-13 01:37:30.536660829 +0000 UTC m=+20.498473125" watchObservedRunningTime="2024-12-13 01:37:30.536820803 +0000 UTC m=+20.498633119"
Dec 13 01:37:30.667256 (udev-worker)[4197]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:37:30.697047 systemd-networkd[1648]: flannel.1: Link UP
Dec 13 01:37:30.697061 systemd-networkd[1648]: flannel.1: Gained carrier
Dec 13 01:37:32.547235 systemd-networkd[1648]: flannel.1: Gained IPv6LL
Dec 13 01:37:34.588021 ntpd[2043]: Listen normally on 6 flannel.1 192.168.0.0:123
Dec 13 01:37:34.588112 ntpd[2043]: Listen normally on 7 flannel.1 [fe80::60da:34ff:fe63:57e7%4]:123
Dec 13 01:37:34.588581 ntpd[2043]: 13 Dec 01:37:34 ntpd[2043]: Listen normally on 6 flannel.1 192.168.0.0:123
Dec 13 01:37:34.588581 ntpd[2043]: 13 Dec 01:37:34 ntpd[2043]: Listen normally on 7 flannel.1 [fe80::60da:34ff:fe63:57e7%4]:123
Dec 13 01:37:43.303337 containerd[2093]: time="2024-12-13T01:37:43.303290661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hd7z4,Uid:99f726eb-5c0a-478c-b656-ac2db91c49a7,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:43.371804 systemd-networkd[1648]: cni0: Link UP
Dec 13 01:37:43.371815 systemd-networkd[1648]: cni0: Gained carrier
Dec 13 01:37:43.379997 (udev-worker)[4334]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:37:43.380080 systemd-networkd[1648]: cni0: Lost carrier
Dec 13 01:37:43.410771 systemd-networkd[1648]: vethc9f26075: Link UP
Dec 13 01:37:43.419144 kernel: cni0: port 1(vethc9f26075) entered blocking state
Dec 13 01:37:43.419252 kernel: cni0: port 1(vethc9f26075) entered disabled state
Dec 13 01:37:43.422271 kernel: vethc9f26075: entered allmulticast mode
Dec 13 01:37:43.422372 kernel: vethc9f26075: entered promiscuous mode
Dec 13 01:37:43.424612 kernel: cni0: port 1(vethc9f26075) entered blocking state
Dec 13 01:37:43.424819 kernel: cni0: port 1(vethc9f26075) entered forwarding state
Dec 13 01:37:43.424859 kernel: cni0: port 1(vethc9f26075) entered disabled state
Dec 13 01:37:43.426140 (udev-worker)[4340]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:37:43.433250 kernel: cni0: port 1(vethc9f26075) entered blocking state
Dec 13 01:37:43.433335 kernel: cni0: port 1(vethc9f26075) entered forwarding state
Dec 13 01:37:43.433353 systemd-networkd[1648]: vethc9f26075: Gained carrier
Dec 13 01:37:43.435248 systemd-networkd[1648]: cni0: Gained carrier
Dec 13 01:37:43.438800 containerd[2093]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:37:43.438800 containerd[2093]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:37:43.484939 containerd[2093]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:37:43.484683817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:43.484939 containerd[2093]: time="2024-12-13T01:37:43.484746035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:43.484939 containerd[2093]: time="2024-12-13T01:37:43.484762842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:43.484939 containerd[2093]: time="2024-12-13T01:37:43.484876769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:43.565741 containerd[2093]: time="2024-12-13T01:37:43.565581890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hd7z4,Uid:99f726eb-5c0a-478c-b656-ac2db91c49a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d44475fb8a7807172afae62382c4c788b8747e8b116abcf820eb2d0a8b50747e\""
Dec 13 01:37:43.579336 containerd[2093]: time="2024-12-13T01:37:43.579148895Z" level=info msg="CreateContainer within sandbox \"d44475fb8a7807172afae62382c4c788b8747e8b116abcf820eb2d0a8b50747e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:37:43.617351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058758581.mount: Deactivated successfully.
Dec 13 01:37:43.630179 containerd[2093]: time="2024-12-13T01:37:43.630103024Z" level=info msg="CreateContainer within sandbox \"d44475fb8a7807172afae62382c4c788b8747e8b116abcf820eb2d0a8b50747e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35f984287640fdd85012b58c15ce5fc4c6969744d714bb1d14bb603fc56123cb\""
Dec 13 01:37:43.631107 containerd[2093]: time="2024-12-13T01:37:43.631078299Z" level=info msg="StartContainer for \"35f984287640fdd85012b58c15ce5fc4c6969744d714bb1d14bb603fc56123cb\""
Dec 13 01:37:43.729951 containerd[2093]: time="2024-12-13T01:37:43.729907717Z" level=info msg="StartContainer for \"35f984287640fdd85012b58c15ce5fc4c6969744d714bb1d14bb603fc56123cb\" returns successfully"
Dec 13 01:37:44.326653 containerd[2093]: time="2024-12-13T01:37:44.326559351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gtxw6,Uid:72751374-c0c7-46fe-867c-63fdf7b86cc0,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:44.390563 systemd-networkd[1648]: veth7b6e13cb: Link UP
Dec 13 01:37:44.391507 kernel: cni0: port 2(veth7b6e13cb) entered blocking state
Dec 13 01:37:44.391554 kernel: cni0: port 2(veth7b6e13cb) entered disabled state
Dec 13 01:37:44.393363 kernel: veth7b6e13cb: entered allmulticast mode
Dec 13 01:37:44.396147 kernel: veth7b6e13cb: entered promiscuous mode
Dec 13 01:37:44.396475 (udev-worker)[4336]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:37:44.419629 kernel: cni0: port 2(veth7b6e13cb) entered blocking state
Dec 13 01:37:44.419733 kernel: cni0: port 2(veth7b6e13cb) entered forwarding state
Dec 13 01:37:44.419973 systemd-networkd[1648]: veth7b6e13cb: Gained carrier
Dec 13 01:37:44.423323 containerd[2093]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"}
Dec 13 01:37:44.423323 containerd[2093]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:37:44.461520 containerd[2093]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:37:44.461390424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:44.461520 containerd[2093]: time="2024-12-13T01:37:44.461476248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:44.462143 containerd[2093]: time="2024-12-13T01:37:44.461502120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:44.462143 containerd[2093]: time="2024-12-13T01:37:44.462043153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:44.517386 systemd[1]: run-containerd-runc-k8s.io-96b4ad4594bef2801b7ad51ca653bf94309cbf7bf621101e489191df7e63b1c0-runc.I6dmOX.mount: Deactivated successfully.
Dec 13 01:37:44.677431 containerd[2093]: time="2024-12-13T01:37:44.674401423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gtxw6,Uid:72751374-c0c7-46fe-867c-63fdf7b86cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"96b4ad4594bef2801b7ad51ca653bf94309cbf7bf621101e489191df7e63b1c0\""
Dec 13 01:37:44.723147 kubelet[3661]: I1213 01:37:44.719698 3661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hd7z4" podStartSLOduration=22.719632889 podStartE2EDuration="22.719632889s" podCreationTimestamp="2024-12-13 01:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:44.618056372 +0000 UTC m=+34.579868667" watchObservedRunningTime="2024-12-13 01:37:44.719632889 +0000 UTC m=+34.681445184"
Dec 13 01:37:44.787460 containerd[2093]: time="2024-12-13T01:37:44.787326349Z" level=info msg="CreateContainer within sandbox \"96b4ad4594bef2801b7ad51ca653bf94309cbf7bf621101e489191df7e63b1c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:37:44.815573 containerd[2093]: time="2024-12-13T01:37:44.815506182Z" level=info msg="CreateContainer within sandbox \"96b4ad4594bef2801b7ad51ca653bf94309cbf7bf621101e489191df7e63b1c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c06337185ac4222cb82e08ab2de3bde4568048b141f7bfcb420c8b92fe6b8867\""
Dec 13 01:37:44.821872 containerd[2093]: time="2024-12-13T01:37:44.818563647Z" level=info msg="StartContainer for \"c06337185ac4222cb82e08ab2de3bde4568048b141f7bfcb420c8b92fe6b8867\""
Dec 13 01:37:44.937260 containerd[2093]: time="2024-12-13T01:37:44.935242970Z" level=info msg="StartContainer for \"c06337185ac4222cb82e08ab2de3bde4568048b141f7bfcb420c8b92fe6b8867\" returns successfully"
Dec 13 01:37:45.279480 systemd-networkd[1648]: cni0: Gained IPv6LL
Dec 13 01:37:45.407469 systemd-networkd[1648]: vethc9f26075: Gained IPv6LL
Dec 13 01:37:45.535480 systemd-networkd[1648]: veth7b6e13cb: Gained IPv6LL
Dec 13 01:37:45.571022 kubelet[3661]: I1213 01:37:45.570455 3661 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gtxw6" podStartSLOduration=23.570403942 podStartE2EDuration="23.570403942s" podCreationTimestamp="2024-12-13 01:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:45.568314035 +0000 UTC m=+35.530126330" watchObservedRunningTime="2024-12-13 01:37:45.570403942 +0000 UTC m=+35.532216237"
Dec 13 01:37:47.588100 ntpd[2043]: Listen normally on 8 cni0 192.168.0.1:123
Dec 13 01:37:47.588279 ntpd[2043]: Listen normally on 9 cni0 [fe80::3097:3dff:fea3:2e1f%5]:123
Dec 13 01:37:47.588743 ntpd[2043]: 13 Dec 01:37:47 ntpd[2043]: Listen normally on 8 cni0 192.168.0.1:123
Dec 13 01:37:47.588743 ntpd[2043]: 13 Dec 01:37:47 ntpd[2043]: Listen normally on 9 cni0 [fe80::3097:3dff:fea3:2e1f%5]:123
Dec 13 01:37:47.588743 ntpd[2043]: 13 Dec 01:37:47 ntpd[2043]: Listen normally on 10 vethc9f26075 [fe80::4cd:80ff:fe59:749d%6]:123
Dec 13 01:37:47.588743 ntpd[2043]: 13 Dec 01:37:47 ntpd[2043]: Listen normally on 11 veth7b6e13cb [fe80::f0c4:b4ff:fe1d:b5f5%7]:123
Dec 13 01:37:47.588341 ntpd[2043]: Listen normally on 10 vethc9f26075 [fe80::4cd:80ff:fe59:749d%6]:123
Dec 13 01:37:47.588384 ntpd[2043]: Listen normally on 11 veth7b6e13cb [fe80::f0c4:b4ff:fe1d:b5f5%7]:123
Dec 13 01:37:50.955599 systemd[1]: Started sshd@7-172.31.20.145:22-139.178.68.195:45662.service - OpenSSH per-connection server daemon (139.178.68.195:45662).
Dec 13 01:37:51.190479 sshd[4571]: Accepted publickey for core from 139.178.68.195 port 45662 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:37:51.194222 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:51.214797 systemd-logind[2059]: New session 8 of user core.
Dec 13 01:37:51.219645 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:37:51.509702 sshd[4571]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:51.528374 systemd[1]: sshd@7-172.31.20.145:22-139.178.68.195:45662.service: Deactivated successfully.
Dec 13 01:37:51.543037 systemd-logind[2059]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:37:51.545220 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:37:51.548537 systemd-logind[2059]: Removed session 8.
Dec 13 01:37:56.541693 systemd[1]: Started sshd@8-172.31.20.145:22-139.178.68.195:48032.service - OpenSSH per-connection server daemon (139.178.68.195:48032).
Dec 13 01:37:56.718469 sshd[4625]: Accepted publickey for core from 139.178.68.195 port 48032 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:37:56.722975 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:56.740342 systemd-logind[2059]: New session 9 of user core.
Dec 13 01:37:56.749467 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:37:56.978796 sshd[4625]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:56.987345 systemd-logind[2059]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:37:56.990730 systemd[1]: sshd@8-172.31.20.145:22-139.178.68.195:48032.service: Deactivated successfully.
Dec 13 01:37:56.995006 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:37:56.996670 systemd-logind[2059]: Removed session 9.
Dec 13 01:38:02.027760 systemd[1]: Started sshd@9-172.31.20.145:22-139.178.68.195:48034.service - OpenSSH per-connection server daemon (139.178.68.195:48034).
Dec 13 01:38:02.299753 sshd[4660]: Accepted publickey for core from 139.178.68.195 port 48034 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:02.305772 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:02.338412 systemd-logind[2059]: New session 10 of user core.
Dec 13 01:38:02.347293 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:38:03.004039 sshd[4660]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:03.019960 systemd-logind[2059]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:38:03.022005 systemd[1]: sshd@9-172.31.20.145:22-139.178.68.195:48034.service: Deactivated successfully.
Dec 13 01:38:03.038374 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:38:03.040835 systemd-logind[2059]: Removed session 10.
Dec 13 01:38:08.035770 systemd[1]: Started sshd@10-172.31.20.145:22-139.178.68.195:44810.service - OpenSSH per-connection server daemon (139.178.68.195:44810).
Dec 13 01:38:08.216018 sshd[4696]: Accepted publickey for core from 139.178.68.195 port 44810 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:08.217945 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:08.230440 systemd-logind[2059]: New session 11 of user core.
Dec 13 01:38:08.244454 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:38:08.457346 sshd[4696]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:08.465016 systemd[1]: sshd@10-172.31.20.145:22-139.178.68.195:44810.service: Deactivated successfully.
Dec 13 01:38:08.470305 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:38:08.471483 systemd-logind[2059]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:38:08.472979 systemd-logind[2059]: Removed session 11.
Dec 13 01:38:08.486726 systemd[1]: Started sshd@11-172.31.20.145:22-139.178.68.195:44816.service - OpenSSH per-connection server daemon (139.178.68.195:44816).
Dec 13 01:38:08.662301 sshd[4711]: Accepted publickey for core from 139.178.68.195 port 44816 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:08.664233 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:08.671909 systemd-logind[2059]: New session 12 of user core.
Dec 13 01:38:08.675643 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:38:09.006381 sshd[4711]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:09.017869 systemd[1]: sshd@11-172.31.20.145:22-139.178.68.195:44816.service: Deactivated successfully.
Dec 13 01:38:09.033176 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:38:09.037683 systemd-logind[2059]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:38:09.046556 systemd[1]: Started sshd@12-172.31.20.145:22-139.178.68.195:44820.service - OpenSSH per-connection server daemon (139.178.68.195:44820).
Dec 13 01:38:09.050876 systemd-logind[2059]: Removed session 12.
Dec 13 01:38:09.206702 sshd[4723]: Accepted publickey for core from 139.178.68.195 port 44820 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:09.209528 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:09.222074 systemd-logind[2059]: New session 13 of user core.
Dec 13 01:38:09.230072 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:38:09.630871 sshd[4723]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:09.638197 systemd[1]: sshd@12-172.31.20.145:22-139.178.68.195:44820.service: Deactivated successfully.
Dec 13 01:38:09.638538 systemd-logind[2059]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:38:09.643877 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:38:09.645606 systemd-logind[2059]: Removed session 13.
Dec 13 01:38:14.671906 systemd[1]: Started sshd@13-172.31.20.145:22-139.178.68.195:44834.service - OpenSSH per-connection server daemon (139.178.68.195:44834).
Dec 13 01:38:14.855726 sshd[4760]: Accepted publickey for core from 139.178.68.195 port 44834 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:14.856574 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:14.862973 systemd-logind[2059]: New session 14 of user core.
Dec 13 01:38:14.868555 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:38:15.147212 sshd[4760]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:15.156256 systemd[1]: sshd@13-172.31.20.145:22-139.178.68.195:44834.service: Deactivated successfully.
Dec 13 01:38:15.164207 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:38:15.167235 systemd-logind[2059]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:38:15.173795 systemd-logind[2059]: Removed session 14.
Dec 13 01:38:15.185631 systemd[1]: Started sshd@14-172.31.20.145:22-139.178.68.195:44840.service - OpenSSH per-connection server daemon (139.178.68.195:44840).
Dec 13 01:38:15.359060 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 44840 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:15.366644 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:15.393894 systemd-logind[2059]: New session 15 of user core.
Dec 13 01:38:15.403331 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:38:16.090617 sshd[4774]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:16.097399 systemd[1]: sshd@14-172.31.20.145:22-139.178.68.195:44840.service: Deactivated successfully.
Dec 13 01:38:16.104939 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:38:16.106773 systemd-logind[2059]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:38:16.110524 systemd-logind[2059]: Removed session 15.
Dec 13 01:38:16.123727 systemd[1]: Started sshd@15-172.31.20.145:22-139.178.68.195:45222.service - OpenSSH per-connection server daemon (139.178.68.195:45222).
Dec 13 01:38:16.313554 sshd[4792]: Accepted publickey for core from 139.178.68.195 port 45222 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:16.321507 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:16.368582 systemd-logind[2059]: New session 16 of user core.
Dec 13 01:38:16.375596 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:38:18.742558 sshd[4792]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:18.754092 systemd[1]: sshd@15-172.31.20.145:22-139.178.68.195:45222.service: Deactivated successfully.
Dec 13 01:38:18.762828 systemd-logind[2059]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:38:18.770647 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:38:18.787712 systemd[1]: Started sshd@16-172.31.20.145:22-139.178.68.195:45236.service - OpenSSH per-connection server daemon (139.178.68.195:45236).
Dec 13 01:38:18.790015 systemd-logind[2059]: Removed session 16.
Dec 13 01:38:18.962900 sshd[4825]: Accepted publickey for core from 139.178.68.195 port 45236 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:18.967188 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:18.975021 systemd-logind[2059]: New session 17 of user core.
Dec 13 01:38:18.981538 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:38:19.378537 sshd[4825]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:19.387664 systemd[1]: sshd@16-172.31.20.145:22-139.178.68.195:45236.service: Deactivated successfully.
Dec 13 01:38:19.395235 systemd-logind[2059]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:38:19.397628 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:38:19.409675 systemd[1]: Started sshd@17-172.31.20.145:22-139.178.68.195:45238.service - OpenSSH per-connection server daemon (139.178.68.195:45238).
Dec 13 01:38:19.412251 systemd-logind[2059]: Removed session 17.
Dec 13 01:38:19.576631 sshd[4837]: Accepted publickey for core from 139.178.68.195 port 45238 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:19.577725 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:19.584196 systemd-logind[2059]: New session 18 of user core.
Dec 13 01:38:19.592479 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:38:19.854240 sshd[4837]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:19.861792 systemd[1]: sshd@17-172.31.20.145:22-139.178.68.195:45238.service: Deactivated successfully.
Dec 13 01:38:19.868923 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:38:19.871111 systemd-logind[2059]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:38:19.872936 systemd-logind[2059]: Removed session 18.
Dec 13 01:38:24.886690 systemd[1]: Started sshd@18-172.31.20.145:22-139.178.68.195:45246.service - OpenSSH per-connection server daemon (139.178.68.195:45246).
Dec 13 01:38:25.073331 sshd[4875]: Accepted publickey for core from 139.178.68.195 port 45246 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:25.074977 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:25.083067 systemd-logind[2059]: New session 19 of user core.
Dec 13 01:38:25.088569 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:38:25.330782 sshd[4875]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:25.334749 systemd[1]: sshd@18-172.31.20.145:22-139.178.68.195:45246.service: Deactivated successfully.
Dec 13 01:38:25.340377 systemd-logind[2059]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:38:25.340875 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:38:25.345847 systemd-logind[2059]: Removed session 19.
Dec 13 01:38:30.360030 systemd[1]: Started sshd@19-172.31.20.145:22-139.178.68.195:37430.service - OpenSSH per-connection server daemon (139.178.68.195:37430).
Dec 13 01:38:30.527162 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 37430 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:30.528223 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:30.534527 systemd-logind[2059]: New session 20 of user core.
Dec 13 01:38:30.538486 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:38:30.740001 sshd[4913]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:30.746653 systemd-logind[2059]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:38:30.747792 systemd[1]: sshd@19-172.31.20.145:22-139.178.68.195:37430.service: Deactivated successfully.
Dec 13 01:38:30.754280 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:38:30.756891 systemd-logind[2059]: Removed session 20.
Dec 13 01:38:35.768857 systemd[1]: Started sshd@20-172.31.20.145:22-139.178.68.195:37446.service - OpenSSH per-connection server daemon (139.178.68.195:37446).
Dec 13 01:38:35.960157 sshd[4948]: Accepted publickey for core from 139.178.68.195 port 37446 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:35.961239 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:35.967912 systemd-logind[2059]: New session 21 of user core.
Dec 13 01:38:35.975733 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:38:36.238789 sshd[4948]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:36.245703 systemd[1]: sshd@20-172.31.20.145:22-139.178.68.195:37446.service: Deactivated successfully.
Dec 13 01:38:36.256375 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:38:36.258206 systemd-logind[2059]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:38:36.259542 systemd-logind[2059]: Removed session 21.
Dec 13 01:38:41.266920 systemd[1]: Started sshd@21-172.31.20.145:22-139.178.68.195:59838.service - OpenSSH per-connection server daemon (139.178.68.195:59838).
Dec 13 01:38:41.428695 sshd[4989]: Accepted publickey for core from 139.178.68.195 port 59838 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:38:41.431218 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:41.438725 systemd-logind[2059]: New session 22 of user core.
Dec 13 01:38:41.446177 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:38:41.666796 sshd[4989]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:41.673599 systemd[1]: sshd@21-172.31.20.145:22-139.178.68.195:59838.service: Deactivated successfully.
Dec 13 01:38:41.679046 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:38:41.680012 systemd-logind[2059]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:38:41.681467 systemd-logind[2059]: Removed session 22.
Dec 13 01:38:55.089968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d-rootfs.mount: Deactivated successfully.
Dec 13 01:38:55.100046 containerd[2093]: time="2024-12-13T01:38:55.098647572Z" level=info msg="shim disconnected" id=9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d namespace=k8s.io
Dec 13 01:38:55.100046 containerd[2093]: time="2024-12-13T01:38:55.098994067Z" level=warning msg="cleaning up after shim disconnected" id=9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d namespace=k8s.io
Dec 13 01:38:55.100046 containerd[2093]: time="2024-12-13T01:38:55.099136458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:55.789993 kubelet[3661]: I1213 01:38:55.789961 3661 scope.go:117] "RemoveContainer" containerID="9d9cbeb0648c7927fb79d678cda3b80ac95fc87a2bca8901f6363dc25f1b6a0d"
Dec 13 01:38:55.793757 containerd[2093]: time="2024-12-13T01:38:55.793718630Z" level=info msg="CreateContainer within sandbox \"47635c9f743579cd5b50b85a1dec757f12685dbf28314eee32e283e219749e85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:38:55.821482 containerd[2093]: time="2024-12-13T01:38:55.821432990Z" level=info msg="CreateContainer within sandbox \"47635c9f743579cd5b50b85a1dec757f12685dbf28314eee32e283e219749e85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b9a16398d0becfaf8dca2edd51c67e96fc05e2278e996d608e80332e889bdb05\""
Dec 13 01:38:55.822017 containerd[2093]: time="2024-12-13T01:38:55.821983518Z" level=info msg="StartContainer for \"b9a16398d0becfaf8dca2edd51c67e96fc05e2278e996d608e80332e889bdb05\""
Dec 13 01:38:55.934995 containerd[2093]: time="2024-12-13T01:38:55.934806274Z" level=info msg="StartContainer for \"b9a16398d0becfaf8dca2edd51c67e96fc05e2278e996d608e80332e889bdb05\" returns successfully"
Dec 13 01:38:56.090562 systemd[1]: run-containerd-runc-k8s.io-b9a16398d0becfaf8dca2edd51c67e96fc05e2278e996d608e80332e889bdb05-runc.mjfH2w.mount: Deactivated successfully.
Dec 13 01:39:01.462555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700-rootfs.mount: Deactivated successfully.
Dec 13 01:39:01.479484 containerd[2093]: time="2024-12-13T01:39:01.479042123Z" level=info msg="shim disconnected" id=feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700 namespace=k8s.io
Dec 13 01:39:01.480063 containerd[2093]: time="2024-12-13T01:39:01.479486761Z" level=warning msg="cleaning up after shim disconnected" id=feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700 namespace=k8s.io
Dec 13 01:39:01.480063 containerd[2093]: time="2024-12-13T01:39:01.479509733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:39:01.861625 kubelet[3661]: I1213 01:39:01.861137 3661 scope.go:117] "RemoveContainer" containerID="feb95357a86f627fcd1c0eb65d7803551f359d1d885c40b74d48a6599dbcb700"
Dec 13 01:39:01.897140 containerd[2093]: time="2024-12-13T01:39:01.893435874Z" level=info msg="CreateContainer within sandbox \"f464137bbccc1eeb3f3c0e8ca9f14e03fe8f1b173477e026b0440ae137802b92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:39:01.980914 containerd[2093]: time="2024-12-13T01:39:01.980556961Z" level=info msg="CreateContainer within sandbox \"f464137bbccc1eeb3f3c0e8ca9f14e03fe8f1b173477e026b0440ae137802b92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e332a7935ad73eb2f4142b020d94263dbb7fdd10cda8dbe372c5e98ca98bcfe8\""
Dec 13 01:39:01.981937 containerd[2093]: time="2024-12-13T01:39:01.981607913Z" level=info msg="StartContainer for \"e332a7935ad73eb2f4142b020d94263dbb7fdd10cda8dbe372c5e98ca98bcfe8\""
Dec 13 01:39:01.993203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945685181.mount: Deactivated successfully.
Dec 13 01:39:02.454706 containerd[2093]: time="2024-12-13T01:39:02.454545811Z" level=info msg="StartContainer for \"e332a7935ad73eb2f4142b020d94263dbb7fdd10cda8dbe372c5e98ca98bcfe8\" returns successfully"
Dec 13 01:39:02.563196 kubelet[3661]: E1213 01:39:02.560159 3661 request.go:1116] Unexpected error when reading response body: context deadline exceeded
Dec 13 01:39:02.563196 kubelet[3661]: E1213 01:39:02.560240 3661 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded"