Jan 13 21:22:06.193369 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:22:06.193408 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:06.193425 kernel: BIOS-provided physical RAM map:
Jan 13 21:22:06.193436 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:22:06.193447 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:22:06.193457 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:22:06.193473 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 21:22:06.193486 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 21:22:06.193497 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 21:22:06.193509 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:22:06.193520 kernel: NX (Execute Disable) protection: active
Jan 13 21:22:06.193531 kernel: APIC: Static calls initialized
Jan 13 21:22:06.195584 kernel: SMBIOS 2.7 present.
Jan 13 21:22:06.195598 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 21:22:06.195616 kernel: Hypervisor detected: KVM
Jan 13 21:22:06.195629 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:22:06.195641 kernel: kvm-clock: using sched offset of 6618400084 cycles
Jan 13 21:22:06.195653 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:22:06.195666 kernel: tsc: Detected 2499.996 MHz processor
Jan 13 21:22:06.195679 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:22:06.195691 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:22:06.195706 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 21:22:06.195719 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:22:06.195731 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:22:06.195744 kernel: Using GB pages for direct mapping
Jan 13 21:22:06.195756 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:22:06.195767 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 21:22:06.195779 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 21:22:06.195791 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:22:06.195802 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 21:22:06.195861 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 21:22:06.195874 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:22:06.195886 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:22:06.195898 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 21:22:06.195909 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:22:06.195922 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 21:22:06.196017 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 21:22:06.196034 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:22:06.196048 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 21:22:06.196066 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 21:22:06.196084 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 21:22:06.196098 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 21:22:06.196112 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 21:22:06.196126 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 21:22:06.196144 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 21:22:06.196156 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 21:22:06.196169 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 21:22:06.196316 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 21:22:06.196332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:22:06.196346 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:22:06.196359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 21:22:06.196373 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 21:22:06.196386 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 21:22:06.196403 kernel: Zone ranges:
Jan 13 21:22:06.196416 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:22:06.196429 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 21:22:06.196442 kernel:   Normal   empty
Jan 13 21:22:06.196455 kernel: Movable zone start for each node
Jan 13 21:22:06.196468 kernel: Early memory node ranges
Jan 13 21:22:06.196481 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:22:06.196494 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 21:22:06.196506 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 21:22:06.196522 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:22:06.196548 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:22:06.196561 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 21:22:06.196575 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:22:06.196588 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:22:06.196663 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 21:22:06.196677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:22:06.196690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:22:06.196703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:22:06.196716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:22:06.196780 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:22:06.196796 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:22:06.196811 kernel: TSC deadline timer available
Jan 13 21:22:06.196824 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:22:06.196837 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:22:06.196850 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 21:22:06.196863 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:22:06.196876 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:22:06.196889 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:22:06.196906 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:22:06.196919 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:22:06.196932 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:22:06.196944 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:22:06.196957 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:22:06.196971 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:06.196984 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:22:06.196997 kernel: random: crng init done
Jan 13 21:22:06.197013 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:22:06.197027 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:22:06.197120 kernel: Fallback order for Node 0: 0
Jan 13 21:22:06.197138 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Jan 13 21:22:06.197152 kernel: Policy zone: DMA32
Jan 13 21:22:06.197167 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:22:06.197260 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 13 21:22:06.197277 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:22:06.197295 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:22:06.197309 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:22:06.197322 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:22:06.197335 kernel: Dynamic Preempt: voluntary
Jan 13 21:22:06.197348 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:22:06.197362 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:22:06.197375 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:22:06.197389 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:22:06.197401 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:22:06.197415 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:22:06.197431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:22:06.197444 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:22:06.197457 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:22:06.197470 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:22:06.197482 kernel: Console: colour VGA+ 80x25
Jan 13 21:22:06.197495 kernel: printk: console [ttyS0] enabled
Jan 13 21:22:06.197508 kernel: ACPI: Core revision 20230628
Jan 13 21:22:06.197520 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 21:22:06.198565 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:22:06.198597 kernel: x2apic enabled
Jan 13 21:22:06.198614 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:22:06.198641 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 21:22:06.198659 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 13 21:22:06.198674 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:22:06.198687 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:22:06.198700 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:22:06.198713 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:22:06.198727 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:22:06.198741 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:22:06.198756 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 21:22:06.198770 kernel: RETBleed: Vulnerable
Jan 13 21:22:06.198787 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:22:06.198800 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:22:06.198814 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:22:06.198828 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 21:22:06.198841 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:22:06.198854 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:22:06.198871 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:22:06.198885 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 21:22:06.198899 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 21:22:06.198913 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 21:22:06.198927 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 21:22:06.198941 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 21:22:06.198956 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 21:22:06.198970 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 13 21:22:06.198984 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Jan 13 21:22:06.198999 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Jan 13 21:22:06.199013 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Jan 13 21:22:06.199030 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Jan 13 21:22:06.199045 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 21:22:06.199060 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Jan 13 21:22:06.199074 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 21:22:06.199089 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:22:06.199104 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:22:06.199118 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:22:06.199133 kernel: landlock: Up and running.
Jan 13 21:22:06.199148 kernel: SELinux: Initializing.
Jan 13 21:22:06.199162 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:22:06.199178 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:22:06.199193 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 21:22:06.199212 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:22:06.199228 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:22:06.199243 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:22:06.199258 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 21:22:06.199273 kernel: signal: max sigframe size: 3632
Jan 13 21:22:06.199289 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:22:06.199305 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 13 21:22:06.199320 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:22:06.199335 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:22:06.199354 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:22:06.199449 kernel: .... node  #0, CPUs:      #1
Jan 13 21:22:06.199505 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:22:06.199523 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:22:06.199564 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:22:06.199580 kernel: smpboot: Max logical packages: 1
Jan 13 21:22:06.199596 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 13 21:22:06.199611 kernel: devtmpfs: initialized
Jan 13 21:22:06.199632 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:22:06.199648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:22:06.199663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:22:06.199678 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:22:06.199694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:22:06.199713 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:22:06.199729 kernel: audit: type=2000 audit(1736803324.779:1): state=initialized audit_enabled=0 res=1
Jan 13 21:22:06.199745 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:22:06.199967 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:22:06.200026 kernel: cpuidle: using governor menu
Jan 13 21:22:06.200045 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:22:06.200060 kernel: dca service started, version 1.12.1
Jan 13 21:22:06.200075 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:22:06.200091 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:22:06.200106 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:22:06.200119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:22:06.200136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:22:06.200153 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:22:06.200175 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:22:06.200193 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:22:06.200209 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:22:06.200226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:22:06.200244 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:22:06.200261 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:22:06.200317 kernel: ACPI: Interpreter enabled
Jan 13 21:22:06.200336 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:22:06.200355 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:22:06.200377 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:22:06.200394 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:22:06.203649 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:22:06.203677 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:22:06.204120 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:22:06.204603 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:22:06.204852 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:22:06.204885 kernel: acpiphp: Slot [3] registered
Jan 13 21:22:06.204903 kernel: acpiphp: Slot [4] registered
Jan 13 21:22:06.204920 kernel: acpiphp: Slot [5] registered
Jan 13 21:22:06.204937 kernel: acpiphp: Slot [6] registered
Jan 13 21:22:06.204953 kernel: acpiphp: Slot [7] registered
Jan 13 21:22:06.204970 kernel: acpiphp: Slot [8] registered
Jan 13 21:22:06.204987 kernel: acpiphp: Slot [9] registered
Jan 13 21:22:06.205003 kernel: acpiphp: Slot [10] registered
Jan 13 21:22:06.205020 kernel: acpiphp: Slot [11] registered
Jan 13 21:22:06.205036 kernel: acpiphp: Slot [12] registered
Jan 13 21:22:06.205057 kernel: acpiphp: Slot [13] registered
Jan 13 21:22:06.205073 kernel: acpiphp: Slot [14] registered
Jan 13 21:22:06.205090 kernel: acpiphp: Slot [15] registered
Jan 13 21:22:06.205106 kernel: acpiphp: Slot [16] registered
Jan 13 21:22:06.205124 kernel: acpiphp: Slot [17] registered
Jan 13 21:22:06.205140 kernel: acpiphp: Slot [18] registered
Jan 13 21:22:06.205157 kernel: acpiphp: Slot [19] registered
Jan 13 21:22:06.205173 kernel: acpiphp: Slot [20] registered
Jan 13 21:22:06.205189 kernel: acpiphp: Slot [21] registered
Jan 13 21:22:06.205209 kernel: acpiphp: Slot [22] registered
Jan 13 21:22:06.205226 kernel: acpiphp: Slot [23] registered
Jan 13 21:22:06.205242 kernel: acpiphp: Slot [24] registered
Jan 13 21:22:06.205258 kernel: acpiphp: Slot [25] registered
Jan 13 21:22:06.205274 kernel: acpiphp: Slot [26] registered
Jan 13 21:22:06.205290 kernel: acpiphp: Slot [27] registered
Jan 13 21:22:06.205307 kernel: acpiphp: Slot [28] registered
Jan 13 21:22:06.205324 kernel: acpiphp: Slot [29] registered
Jan 13 21:22:06.205341 kernel: acpiphp: Slot [30] registered
Jan 13 21:22:06.205358 kernel: acpiphp: Slot [31] registered
Jan 13 21:22:06.205378 kernel: PCI host bridge to bus 0000:00
Jan 13 21:22:06.205553 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 13 21:22:06.207860 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 13 21:22:06.207991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:22:06.208109 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 21:22:06.208225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:22:06.208433 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:22:06.209646 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:22:06.209834 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 21:22:06.210001 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:22:06.210139 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 21:22:06.210273 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 21:22:06.210405 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 21:22:06.213375 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 21:22:06.213561 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 21:22:06.213695 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 21:22:06.213822 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 21:22:06.213999 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 21:22:06.214129 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 21:22:06.214256 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 21:22:06.214382 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:22:06.214639 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:22:06.214803 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 21:22:06.215051 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:22:06.215195 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 21:22:06.215217 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:22:06.215234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:22:06.215255 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:22:06.215270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:22:06.215284 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:22:06.215297 kernel: iommu: Default domain type: Translated
Jan 13 21:22:06.215310 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:22:06.215324 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:22:06.215339 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:22:06.215355 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:22:06.215371 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 21:22:06.216618 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 21:22:06.216797 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 21:22:06.216953 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:22:06.216977 kernel: vgaarb: loaded
Jan 13 21:22:06.216995 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 21:22:06.217013 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 21:22:06.217030 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:22:06.217046 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:22:06.217119 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:22:06.217140 kernel: pnp: PnP ACPI init
Jan 13 21:22:06.217156 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:22:06.217173 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:22:06.217189 kernel: NET: Registered PF_INET protocol family
Jan 13 21:22:06.217205 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:22:06.217222 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:22:06.217237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:22:06.217251 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:22:06.217266 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:22:06.217285 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:22:06.217299 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:22:06.217315 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:22:06.217330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:22:06.217346 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:22:06.217503 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 13 21:22:06.224719 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 13 21:22:06.224871 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:22:06.224996 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 21:22:06.225143 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:22:06.225167 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:22:06.225185 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:22:06.225202 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 21:22:06.225217 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:22:06.225234 kernel: Initialise system trusted keyrings
Jan 13 21:22:06.225251 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:22:06.225273 kernel: Key type asymmetric registered
Jan 13 21:22:06.225287 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:22:06.225301 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:22:06.225314 kernel: io scheduler mq-deadline registered
Jan 13 21:22:06.225328 kernel: io scheduler kyber registered
Jan 13 21:22:06.225342 kernel: io scheduler bfq registered
Jan 13 21:22:06.225357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:22:06.225372 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:22:06.225386 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:22:06.225405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:22:06.225419 kernel: i8042: Warning: Keylock active
Jan 13 21:22:06.225434 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:22:06.225449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:22:06.225633 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:22:06.225770 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:22:06.225917 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:22:05 UTC (1736803325)
Jan 13 21:22:06.226057 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:22:06.226084 kernel: intel_pstate: CPU model not supported
Jan 13 21:22:06.226101 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:22:06.226118 kernel: Segment Routing with IPv6
Jan 13 21:22:06.226136 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:22:06.226153 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:22:06.226170 kernel: Key type dns_resolver registered
Jan 13 21:22:06.226187 kernel: IPI shorthand broadcast: enabled
Jan 13 21:22:06.226204 kernel: sched_clock: Marking stable (664237903, 275331318)->(1038178280, -98609059)
Jan 13 21:22:06.226221 kernel: registered taskstats version 1
Jan 13 21:22:06.226243 kernel: Loading compiled-in X.509 certificates
Jan 13 21:22:06.226260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:22:06.226276 kernel: Key type .fscrypt registered
Jan 13 21:22:06.226292 kernel: Key type fscrypt-provisioning registered
Jan 13 21:22:06.226310 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:22:06.226327 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:22:06.226344 kernel: ima: No architecture policies found Jan 13 21:22:06.226361 kernel: clk: Disabling unused clocks Jan 13 21:22:06.226378 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:22:06.226400 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:22:06.226417 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:22:06.226434 kernel: Run /init as init process Jan 13 21:22:06.226450 kernel: with arguments: Jan 13 21:22:06.226468 kernel: /init Jan 13 21:22:06.226484 kernel: with environment: Jan 13 21:22:06.226501 kernel: HOME=/ Jan 13 21:22:06.226517 kernel: TERM=linux Jan 13 21:22:06.227612 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:22:06.227655 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:22:06.227688 systemd[1]: Detected virtualization amazon. Jan 13 21:22:06.227709 systemd[1]: Detected architecture x86-64. Jan 13 21:22:06.227725 systemd[1]: Running in initrd. Jan 13 21:22:06.227742 systemd[1]: No hostname configured, using default hostname. Jan 13 21:22:06.227761 systemd[1]: Hostname set to . Jan 13 21:22:06.227778 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:22:06.227795 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:22:06.227816 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:22:06.227837 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 21:22:06.227853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:22:06.227871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:22:06.227889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:22:06.227908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:22:06.227927 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:22:06.227953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:22:06.227972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:22:06.227991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:22:06.228009 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:22:06.228028 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:22:06.228046 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:22:06.228065 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:22:06.228086 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:22:06.228106 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:22:06.228125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:22:06.228144 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:22:06.228163 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:22:06.228181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:22:06.228201 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:06.228220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:06.228241 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:22:06.228258 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:22:06.228275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:22:06.228293 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:22:06.228310 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:22:06.228331 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:22:06.228352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:22:06.228369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:06.228426 systemd-journald[178]: Collecting audit messages is disabled.
Jan 13 21:22:06.228468 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:22:06.228485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:06.228504 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:22:06.228524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:22:06.229583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:22:06.229615 systemd-journald[178]: Journal started
Jan 13 21:22:06.229652 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2213c849da0ac63934b35e0236b74d) is 4.8M, max 38.6M, 33.7M free.
Jan 13 21:22:06.236889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:22:06.236951 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:22:06.241929 systemd-modules-load[179]: Inserted module 'overlay'
Jan 13 21:22:06.399505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:22:06.399563 kernel: Bridge firewalling registered
Jan 13 21:22:06.248996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:22:06.297935 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 13 21:22:06.404108 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:06.413434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:06.419761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:06.424920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:06.434709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:06.453868 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:06.469728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:06.481025 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:06.485738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:06.498807 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:22:06.536103 dracut-cmdline[212]: dracut-dracut-053
Jan 13 21:22:06.543771 systemd-resolved[207]: Positive Trust Anchors:
Jan 13 21:22:06.558447 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:06.543792 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:22:06.543888 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:22:06.552142 systemd-resolved[207]: Defaulting to hostname 'linux'.
Jan 13 21:22:06.554136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:06.555688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:06.654567 kernel: SCSI subsystem initialized
Jan 13 21:22:06.668583 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:22:06.681567 kernel: iscsi: registered transport (tcp)
Jan 13 21:22:06.710698 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:22:06.710785 kernel: QLogic iSCSI HBA Driver
Jan 13 21:22:06.767021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:22:06.777259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:22:06.816599 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:22:06.816717 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:22:06.816743 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:22:06.873583 kernel: raid6: avx512x4 gen() 13734 MB/s
Jan 13 21:22:06.890591 kernel: raid6: avx512x2 gen() 12662 MB/s
Jan 13 21:22:06.907595 kernel: raid6: avx512x1 gen() 14871 MB/s
Jan 13 21:22:06.924581 kernel: raid6: avx2x4 gen() 15963 MB/s
Jan 13 21:22:06.941587 kernel: raid6: avx2x2 gen() 13558 MB/s
Jan 13 21:22:06.958866 kernel: raid6: avx2x1 gen() 9878 MB/s
Jan 13 21:22:06.958940 kernel: raid6: using algorithm avx2x4 gen() 15963 MB/s
Jan 13 21:22:06.977560 kernel: raid6: .... xor() 4636 MB/s, rmw enabled
Jan 13 21:22:06.977634 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 21:22:07.003562 kernel: xor: automatically using best checksumming function avx
Jan 13 21:22:07.233563 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:22:07.249792 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:22:07.259332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:07.294647 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jan 13 21:22:07.302025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:07.312774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:22:07.343633 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jan 13 21:22:07.404835 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:22:07.412230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:22:07.487405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:07.500940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:22:07.554891 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:22:07.558920 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:22:07.564970 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:07.576526 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:22:07.591003 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:22:07.624962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:22:07.648673 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:22:07.676304 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:22:07.689597 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:22:07.689796 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 21:22:07.690007 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:22:07.690035 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:22:07.690053 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:9a:8e:ba:4d:31
Jan 13 21:22:07.686655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:22:07.686829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:07.693683 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:07.694814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:22:07.697492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:07.701182 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:22:07.706947 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:07.718477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:07.730995 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 21:22:07.731386 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:22:07.748679 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 21:22:07.755558 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:22:07.755626 kernel: GPT:9289727 != 16777215
Jan 13 21:22:07.755652 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:22:07.755670 kernel: GPT:9289727 != 16777215
Jan 13 21:22:07.755687 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:22:07.755704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:22:07.848260 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (453)
Jan 13 21:22:07.849558 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (448)
Jan 13 21:22:07.915244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:07.926142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:07.953658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 21:22:07.973775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 21:22:07.996754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:22:07.998406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:08.016139 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 21:22:08.017729 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 21:22:08.027759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:22:08.039776 disk-uuid[627]: Primary Header is updated.
Jan 13 21:22:08.039776 disk-uuid[627]: Secondary Entries is updated.
Jan 13 21:22:08.039776 disk-uuid[627]: Secondary Header is updated.
Jan 13 21:22:08.054602 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:22:08.062563 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:22:09.071616 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:22:09.072515 disk-uuid[628]: The operation has completed successfully.
Jan 13 21:22:09.251966 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:22:09.252104 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:22:09.277742 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:22:09.285771 sh[886]: Success
Jan 13 21:22:09.307821 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:22:09.445832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:22:09.453664 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:22:09.459492 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:22:09.497726 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:22:09.497788 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:09.497809 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:22:09.498581 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:22:09.499674 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:22:09.532568 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:22:09.535580 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:22:09.538744 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:22:09.545747 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:22:09.550816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:22:09.571044 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:09.571106 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:09.571119 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:22:09.576649 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:22:09.586631 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:09.586325 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:22:09.592291 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:22:09.600715 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:22:09.695707 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:22:09.710530 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:22:09.768874 systemd-networkd[1078]: lo: Link UP
Jan 13 21:22:09.768886 systemd-networkd[1078]: lo: Gained carrier
Jan 13 21:22:09.773604 systemd-networkd[1078]: Enumeration completed
Jan 13 21:22:09.773730 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:22:09.775086 systemd[1]: Reached target network.target - Network.
Jan 13 21:22:09.779410 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:09.779415 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:22:09.786877 systemd-networkd[1078]: eth0: Link UP
Jan 13 21:22:09.787639 systemd-networkd[1078]: eth0: Gained carrier
Jan 13 21:22:09.787657 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:09.796280 ignition[997]: Ignition 2.19.0
Jan 13 21:22:09.796293 ignition[997]: Stage: fetch-offline
Jan 13 21:22:09.796579 ignition[997]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:09.798406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:22:09.796593 ignition[997]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:09.799799 systemd-networkd[1078]: eth0: DHCPv4 address 172.31.23.220/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:22:09.796889 ignition[997]: Ignition finished successfully
Jan 13 21:22:09.807773 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:22:09.835344 ignition[1087]: Ignition 2.19.0
Jan 13 21:22:09.835358 ignition[1087]: Stage: fetch
Jan 13 21:22:09.835849 ignition[1087]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:09.835862 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:09.835971 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:09.874172 ignition[1087]: PUT result: OK
Jan 13 21:22:09.877223 ignition[1087]: parsed url from cmdline: ""
Jan 13 21:22:09.877235 ignition[1087]: no config URL provided
Jan 13 21:22:09.877245 ignition[1087]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:22:09.877260 ignition[1087]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:22:09.877293 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:09.879295 ignition[1087]: PUT result: OK
Jan 13 21:22:09.879836 ignition[1087]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:22:09.882068 ignition[1087]: GET result: OK
Jan 13 21:22:09.883182 ignition[1087]: parsing config with SHA512: 886da39d9134714fe65f2b77edcb518b07db9ee3119b1b5f790273d70bebb1a611279a4f771c2f698ce52f257e23b76cc57165b9b70ea1df98aebeb6d8c7b6cb
Jan 13 21:22:09.888439 unknown[1087]: fetched base config from "system"
Jan 13 21:22:09.888453 unknown[1087]: fetched base config from "system"
Jan 13 21:22:09.889068 ignition[1087]: fetch: fetch complete
Jan 13 21:22:09.888461 unknown[1087]: fetched user config from "aws"
Jan 13 21:22:09.889075 ignition[1087]: fetch: fetch passed
Jan 13 21:22:09.891230 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:22:09.889129 ignition[1087]: Ignition finished successfully
Jan 13 21:22:09.897582 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:22:09.916377 ignition[1094]: Ignition 2.19.0
Jan 13 21:22:09.916391 ignition[1094]: Stage: kargs
Jan 13 21:22:09.916869 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:09.916881 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:09.916999 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:09.919322 ignition[1094]: PUT result: OK
Jan 13 21:22:09.930064 ignition[1094]: kargs: kargs passed
Jan 13 21:22:09.930423 ignition[1094]: Ignition finished successfully
Jan 13 21:22:09.933411 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:22:09.943775 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:22:09.977045 ignition[1101]: Ignition 2.19.0
Jan 13 21:22:09.977150 ignition[1101]: Stage: disks
Jan 13 21:22:09.977778 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:09.977794 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:09.978018 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:09.981606 ignition[1101]: PUT result: OK
Jan 13 21:22:09.996479 ignition[1101]: disks: disks passed
Jan 13 21:22:09.996595 ignition[1101]: Ignition finished successfully
Jan 13 21:22:09.999780 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:22:10.001480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:22:10.003403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:22:10.005195 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:22:10.008345 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:22:10.011833 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:22:10.017814 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:22:10.058325 systemd-fsck[1109]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:22:10.063771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:22:10.073097 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:22:10.207603 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:22:10.208738 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:22:10.211791 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:22:10.225638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:22:10.230610 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:22:10.233770 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:22:10.236104 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:22:10.237792 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:22:10.250675 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:22:10.253731 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1128)
Jan 13 21:22:10.256903 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:10.256949 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:10.256963 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:22:10.262224 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:22:10.274563 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:22:10.277983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:22:10.380866 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:22:10.388885 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:22:10.398685 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:22:10.408145 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:22:10.558975 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:22:10.566734 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:22:10.571724 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:22:10.582556 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:10.582518 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:22:10.621214 ignition[1247]: INFO : Ignition 2.19.0
Jan 13 21:22:10.623345 ignition[1247]: INFO : Stage: mount
Jan 13 21:22:10.623345 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:10.623345 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:10.623345 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:10.629399 ignition[1247]: INFO : PUT result: OK
Jan 13 21:22:10.633152 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:22:10.634524 ignition[1247]: INFO : mount: mount passed
Jan 13 21:22:10.634524 ignition[1247]: INFO : Ignition finished successfully
Jan 13 21:22:10.637677 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:22:10.647660 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:22:10.674774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:22:10.698766 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1260)
Jan 13 21:22:10.698832 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:10.700879 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:10.700926 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:22:10.707869 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:22:10.709942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:22:10.738285 ignition[1277]: INFO : Ignition 2.19.0
Jan 13 21:22:10.738285 ignition[1277]: INFO : Stage: files
Jan 13 21:22:10.740437 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:10.740437 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:10.740437 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:10.740437 ignition[1277]: INFO : PUT result: OK
Jan 13 21:22:10.748331 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:22:10.750810 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:22:10.750810 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:22:10.758302 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:22:10.759886 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:22:10.759886 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:22:10.759070 unknown[1277]: wrote ssh authorized keys file for user: core
Jan 13 21:22:10.764157 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:22:10.764157 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:22:10.889457 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:22:11.032622 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:22:11.035086 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:22:11.035086 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:22:11.040502 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:22:11.058614 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:22:11.475728 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:22:11.804835 systemd-networkd[1078]: eth0: Gained IPv6LL
Jan 13 21:22:11.975314 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:22:11.975314 ignition[1277]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:22:11.980336 ignition[1277]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:22:11.983660 ignition[1277]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:22:11.983660 ignition[1277]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:22:11.983660 ignition[1277]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:22:11.989889 ignition[1277]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:22:11.989889 ignition[1277]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:22:11.989889 ignition[1277]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:22:11.989889 ignition[1277]: INFO : files: files passed
Jan 13 21:22:11.989889 ignition[1277]: INFO : Ignition finished successfully
Jan 13 21:22:11.997099 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:22:12.005876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:22:12.012516 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:22:12.016710 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:22:12.016916 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:22:12.046983 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:12.046983 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:12.053259 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:12.052882 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:22:12.056339 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:22:12.065016 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:22:12.109501 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:22:12.109681 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:22:12.112876 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:22:12.114866 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:22:12.115151 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:22:12.124906 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:22:12.139849 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:22:12.145868 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:22:12.170601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:12.173562 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:12.174218 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:22:12.174787 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:22:12.174971 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:22:12.176128 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:22:12.176808 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:22:12.177282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:22:12.177699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:22:12.177861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:22:12.178025 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:22:12.178182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:22:12.178362 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:22:12.178518 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:22:12.178835 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:22:12.178960 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:22:12.179105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:22:12.181326 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:22:12.182545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:22:12.183113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:22:12.195547 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:22:12.200123 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:22:12.200328 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:22:12.202500 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:22:12.202668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:22:12.206692 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:22:12.206825 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:22:12.245781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:22:12.249806 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:22:12.250330 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:12.269994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:22:12.271936 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:22:12.273512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:12.275241 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:22:12.276556 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:22:12.291022 ignition[1330]: INFO : Ignition 2.19.0
Jan 13 21:22:12.291022 ignition[1330]: INFO : Stage: umount
Jan 13 21:22:12.291022 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:12.291022 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:22:12.291022 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:22:12.287784 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:22:12.298951 ignition[1330]: INFO : PUT result: OK
Jan 13 21:22:12.287915 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:22:12.304585 ignition[1330]: INFO : umount: umount passed
Jan 13 21:22:12.304585 ignition[1330]: INFO : Ignition finished successfully
Jan 13 21:22:12.309177 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:22:12.310091 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:22:12.312038 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:22:12.312094 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:22:12.314366 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:22:12.316826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:22:12.324095 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:22:12.324183 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:22:12.326360 systemd[1]: Stopped target network.target - Network.
Jan 13 21:22:12.330361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:22:12.331379 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:22:12.333728 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:22:12.335441 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:22:12.338630 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:22:12.341743 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:22:12.342790 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:22:12.344810 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:22:12.344874 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:22:12.349040 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:22:12.349127 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:22:12.351872 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:22:12.352053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:22:12.354248 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:22:12.355937 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:22:12.362062 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:22:12.363317 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:12.367595 systemd-networkd[1078]: eth0: DHCPv6 lease lost
Jan 13 21:22:12.378187 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:22:12.379033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:22:12.379166 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:22:12.382115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:22:12.382190 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:22:12.390821 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:22:12.393021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:22:12.394111 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:22:12.397967 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:12.401287 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:22:12.402512 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:12.413780 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:22:12.415489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:12.416821 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:22:12.416885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:12.420189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:22:12.420258 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:12.424645 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:22:12.424819 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:12.427527 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:22:12.427847 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:22:12.433252 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:22:12.433325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:12.437665 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:22:12.437721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:12.440890 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:22:12.441297 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:22:12.444807 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:22:12.444874 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:22:12.448117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:22:12.448232 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:12.461937 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:22:12.463228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:22:12.463320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:12.465150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:22:12.465214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:12.475278 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:22:12.475408 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:22:12.491159 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:22:12.491302 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:22:12.494263 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:22:12.496967 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:22:12.497057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:22:12.505991 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:22:12.516013 systemd[1]: Switching root.
Jan 13 21:22:12.558783 systemd-journald[178]: Journal stopped
Jan 13 21:22:13.886744 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:22:13.886823 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:22:13.886847 kernel: SELinux: policy capability open_perms=1
Jan 13 21:22:13.886867 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:22:13.886884 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:22:13.886900 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:22:13.886923 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:22:13.886939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:22:13.886955 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:22:13.886973 kernel: audit: type=1403 audit(1736803332.739:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:22:13.886991 systemd[1]: Successfully loaded SELinux policy in 42.241ms.
Jan 13 21:22:13.887021 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.906ms.
Jan 13 21:22:13.887041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:22:13.887059 systemd[1]: Detected virtualization amazon.
Jan 13 21:22:13.887079 systemd[1]: Detected architecture x86-64.
Jan 13 21:22:13.887099 systemd[1]: Detected first boot.
Jan 13 21:22:13.887116 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:22:13.887135 zram_generator::config[1373]: No configuration found.
Jan 13 21:22:13.887154 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:22:13.887172 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:22:13.887191 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:22:13.887209 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:22:13.887227 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:22:13.887248 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:22:13.887272 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:22:13.887290 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:22:13.887308 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:22:13.887328 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:22:13.887347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:22:13.887367 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:22:13.887385 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:22:13.887407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:22:13.887425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:22:13.887442 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:22:13.887460 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:22:13.887479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:22:13.887497 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:22:13.887515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:22:13.887545 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:22:13.887566 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:22:13.887588 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:22:13.887606 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:22:13.887624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:13.887647 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:22:13.887665 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:22:13.887685 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:22:13.887704 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:22:13.887724 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:22:13.887747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:22:13.887766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:13.887786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:13.887805 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:22:13.887828 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:22:13.887848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:22:13.887867 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:22:13.887887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:13.887907 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:22:13.887929 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:22:13.887950 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:22:13.887971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:22:13.887990 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:22:13.888010 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:22:13.888030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:13.888050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:22:13.888070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:22:13.888093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:22:13.888114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:22:13.888135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:22:13.888155 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:22:13.888175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:22:13.888196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:22:13.888216 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:22:13.888236 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:22:13.888256 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:22:13.888280 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:22:13.888300 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:22:13.888321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:22:13.888342 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:22:13.888364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:22:13.888386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:22:13.888407 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:22:13.888427 systemd[1]: Stopped verity-setup.service.
Jan 13 21:22:13.888449 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:13.888500 systemd-journald[1452]: Collecting audit messages is disabled.
Jan 13 21:22:13.889595 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:22:13.889630 systemd-journald[1452]: Journal started
Jan 13 21:22:13.889669 systemd-journald[1452]: Runtime Journal (/run/log/journal/ec2213c849da0ac63934b35e0236b74d) is 4.8M, max 38.6M, 33.7M free.
Jan 13 21:22:13.541509 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:22:13.567665 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 21:22:13.568058 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:22:13.891808 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:22:13.921011 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:22:13.921093 kernel: fuse: init (API version 7.39)
Jan 13 21:22:13.896229 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:22:13.898748 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:22:13.901029 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:22:13.902419 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:22:13.904727 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:13.907315 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:22:13.908657 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:22:13.910713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:22:13.910886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:22:13.913067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:22:13.913240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:22:13.915137 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:22:13.919764 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:22:13.919957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:22:13.944209 kernel: ACPI: bus type drm_connector registered
Jan 13 21:22:13.945853 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:22:13.946087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:22:13.948584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:13.954603 kernel: loop: module loaded
Jan 13 21:22:13.951100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:22:13.952984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:22:13.953360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:22:13.978224 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:22:13.990888 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:22:13.999659 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:22:14.000966 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:22:14.001021 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:22:14.006615 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:22:14.022763 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:22:14.033979 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:22:14.036107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:14.040866 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:22:14.048937 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:22:14.050350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:22:14.055236 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:22:14.056729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:22:14.074009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:14.080767 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:22:14.088855 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:22:14.093083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:22:14.095944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:22:14.098674 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:22:14.112301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:14.132748 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:22:14.153759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:22:14.161458 systemd-journald[1452]: Time spent on flushing to /var/log/journal/ec2213c849da0ac63934b35e0236b74d is 165.602ms for 960 entries.
Jan 13 21:22:14.161458 systemd-journald[1452]: System Journal (/var/log/journal/ec2213c849da0ac63934b35e0236b74d) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:22:14.358580 systemd-journald[1452]: Received client request to flush runtime journal.
Jan 13 21:22:14.358666 kernel: loop0: detected capacity change from 0 to 61336
Jan 13 21:22:14.196086 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:22:14.198051 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:22:14.210935 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:22:14.267407 udevadm[1507]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:22:14.311860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:14.327913 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:22:14.339112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:22:14.365939 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:22:14.378613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:22:14.381195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:22:14.383622 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:22:14.417605 kernel: loop1: detected capacity change from 0 to 140768
Jan 13 21:22:14.420440 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Jan 13 21:22:14.420470 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Jan 13 21:22:14.433760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:14.511178 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:22:14.631630 kernel: loop3: detected capacity change from 0 to 210664
Jan 13 21:22:14.714959 kernel: loop4: detected capacity change from 0 to 61336
Jan 13 21:22:14.751806 kernel: loop5: detected capacity change from 0 to 140768
Jan 13 21:22:14.811594 kernel: loop6: detected capacity change from 0 to 142488
Jan 13 21:22:14.872603 kernel: loop7: detected capacity change from 0 to 210664
Jan 13 21:22:14.927419 (sd-merge)[1525]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 21:22:14.928130 (sd-merge)[1525]: Merged extensions into '/usr'.
Jan 13 21:22:14.940949 systemd[1]: Reloading requested from client PID 1499 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:22:14.940967 systemd[1]: Reloading...
Jan 13 21:22:15.071462 zram_generator::config[1551]: No configuration found.
Jan 13 21:22:15.271097 ldconfig[1494]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:22:15.354341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:15.448121 systemd[1]: Reloading finished in 506 ms.
Jan 13 21:22:15.474783 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:22:15.476440 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:22:15.492906 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:22:15.498903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:22:15.523881 systemd[1]: Reloading requested from client PID 1600 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:22:15.523904 systemd[1]: Reloading...
Jan 13 21:22:15.581063 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:22:15.581614 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:22:15.584199 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:22:15.584898 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Jan 13 21:22:15.587807 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Jan 13 21:22:15.594880 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:22:15.595047 systemd-tmpfiles[1601]: Skipping /boot
Jan 13 21:22:15.615191 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:22:15.615352 systemd-tmpfiles[1601]: Skipping /boot
Jan 13 21:22:15.672424 zram_generator::config[1628]: No configuration found.
Jan 13 21:22:15.895987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:15.979169 systemd[1]: Reloading finished in 454 ms.
Jan 13 21:22:16.000685 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:22:16.012393 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:16.034810 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:22:16.043857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:22:16.058567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:22:16.064700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:16.081446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:16.096130 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:22:16.105969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.106482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:16.113821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:22:16.126959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:22:16.143001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:22:16.145093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:16.156805 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:22:16.158099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.163506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.165082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:16.165337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:16.165490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.191622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.192241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:16.202102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:22:16.203829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:16.204132 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:22:16.205400 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:16.208226 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:22:16.224266 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:22:16.226501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:22:16.229226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:22:16.229549 systemd-udevd[1690]: Using default interface naming scheme 'v255'.
Jan 13 21:22:16.230212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:22:16.232126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:22:16.233174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:22:16.235466 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:22:16.235965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:22:16.237884 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:22:16.238068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:22:16.249697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:22:16.249835 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:22:16.259757 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:22:16.263853 augenrules[1712]: No rules
Jan 13 21:22:16.265600 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:22:16.282199 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:22:16.306467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:16.316856 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:22:16.319388 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:22:16.329674 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:22:16.332136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:22:16.399377 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:22:16.507498 systemd-networkd[1724]: lo: Link UP
Jan 13 21:22:16.507516 systemd-networkd[1724]: lo: Gained carrier
Jan 13 21:22:16.512208 systemd-networkd[1724]: Enumeration completed
Jan 13 21:22:16.512342 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:22:16.517843 systemd-networkd[1724]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:16.517858 systemd-networkd[1724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:22:16.520086 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:22:16.524757 systemd-networkd[1724]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:16.524809 systemd-networkd[1724]: eth0: Link UP
Jan 13 21:22:16.525019 systemd-networkd[1724]: eth0: Gained carrier
Jan 13 21:22:16.525039 systemd-networkd[1724]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:16.529672 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:22:16.534615 systemd-networkd[1724]: eth0: DHCPv4 address 172.31.23.220/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:22:16.535741 systemd-resolved[1687]: Positive Trust Anchors:
Jan 13 21:22:16.535757 systemd-resolved[1687]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:22:16.535813 systemd-resolved[1687]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:22:16.547417 systemd-resolved[1687]: Defaulting to hostname 'linux'.
Jan 13 21:22:16.550135 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:16.551347 systemd[1]: Reached target network.target - Network.
Jan 13 21:22:16.552444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:16.586558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:22:16.591639 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:22:16.599573 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 13 21:22:16.605569 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jan 13 21:22:16.615562 kernel: ACPI: button: Sleep Button [SLPF]
Jan 13 21:22:16.630922 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jan 13 21:22:16.641370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:16.673611 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1740)
Jan 13 21:22:16.673685 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:22:16.831912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:22:16.886072 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:22:16.910620 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:22:16.926785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:22:16.930086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:16.949980 lvm[1844]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:22:16.951277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:22:16.981572 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:22:16.983103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:22:16.984200 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:22:16.985327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:22:16.986779 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:22:16.988782 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:22:16.990278 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:22:16.993511 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:22:16.995164 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:22:16.995213 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:22:16.996221 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:22:16.997896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:22:17.001601 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:22:17.011941 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:22:17.014784 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:22:17.017104 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:22:17.018673 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:22:17.020395 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:22:17.021875 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:22:17.021915 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:22:17.025679 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:22:17.031799 lvm[1852]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:22:17.039754 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:22:17.051066 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:22:17.071000 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:22:17.075769 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:22:17.078759 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:22:17.081061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:22:17.110680 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 21:22:17.121042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:22:17.132746 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 21:22:17.138343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:22:17.143034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:22:17.160214 jq[1856]: false
Jan 13 21:22:17.159162 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:22:17.162226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:22:17.163400 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:22:17.174039 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:22:17.179833 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:22:17.183154 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:22:17.194143 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:22:17.194368 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:22:17.196116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:22:17.196360 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:22:17.224579 jq[1869]: true
Jan 13 21:22:17.255989 (ntainerd)[1877]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:22:17.256699 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:22:17.283032 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:22:17.284522 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:22:17.299927 ntpd[1859]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: ----------------------------------------------------
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: corporation. Support and training for ntp-4 are
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: available at https://www.nwtime.org/support
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: ----------------------------------------------------
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: proto: precision = 0.083 usec (-23)
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: basedate set to 2025-01-01
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:22:17.315645 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:22:17.315020 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:22:17.316374 update_engine[1867]: I20250113 21:22:17.304477 1867 main.cc:92] Flatcar Update Engine starting
Jan 13 21:22:17.299961 ntpd[1859]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listen normally on 3 eth0 172.31.23.220:123
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listen normally on 4 lo [::1]:123
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: bind(21) AF_INET6 fe80::49a:8eff:feba:4d31%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: unable to create socket on eth0 (5) for fe80::49a:8eff:feba:4d31%2#123
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: failed to init interface for address fe80::49a:8eff:feba:4d31%2
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: Listening on routing socket on fd #21 for interface updates
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:22:17.344342 ntpd[1859]: 13 Jan 21:22:17 ntpd[1859]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:22:17.322150 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:22:17.299972 ntpd[1859]: ----------------------------------------------------
Jan 13 21:22:17.322185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:22:17.299983 ntpd[1859]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:22:17.330298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:22:17.299993 ntpd[1859]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:22:17.330330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:22:17.300002 ntpd[1859]: corporation. Support and training for ntp-4 are
Jan 13 21:22:17.300012 ntpd[1859]: available at https://www.nwtime.org/support
Jan 13 21:22:17.300022 ntpd[1859]: ----------------------------------------------------
Jan 13 21:22:17.305564 ntpd[1859]: proto: precision = 0.083 usec (-23)
Jan 13 21:22:17.307172 dbus-daemon[1855]: [system] SELinux support is enabled
Jan 13 21:22:17.307254 ntpd[1859]: basedate set to 2025-01-01
Jan 13 21:22:17.307272 ntpd[1859]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:22:17.314953 ntpd[1859]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:22:17.315004 ntpd[1859]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:22:17.317511 ntpd[1859]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:22:17.317573 ntpd[1859]: Listen normally on 3 eth0 172.31.23.220:123
Jan 13 21:22:17.317615 ntpd[1859]: Listen normally on 4 lo [::1]:123
Jan 13 21:22:17.317670 ntpd[1859]: bind(21) AF_INET6 fe80::49a:8eff:feba:4d31%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 21:22:17.317692 ntpd[1859]: unable to create socket on eth0 (5) for fe80::49a:8eff:feba:4d31%2#123
Jan 13 21:22:17.317710 ntpd[1859]: failed to init interface for address fe80::49a:8eff:feba:4d31%2
Jan 13 21:22:17.317745 ntpd[1859]: Listening on routing socket on fd #21 for interface updates
Jan 13 21:22:17.320478 ntpd[1859]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:22:17.320511 ntpd[1859]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:22:17.340876 dbus-daemon[1855]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1724 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 21:22:17.348463 dbus-daemon[1855]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 21:22:17.357572 update_engine[1867]: I20250113 21:22:17.357280 1867 update_check_scheduler.cc:74] Next update check in 9m8s
Jan 13 21:22:17.362801 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 21:22:17.368628 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:22:17.391991 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:22:17.397846 extend-filesystems[1857]: Found loop4
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found loop5
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found loop6
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found loop7
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p1
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p2
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p3
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found usr
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p4
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p6
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p7
Jan 13 21:22:17.403708 extend-filesystems[1857]: Found nvme0n1p9
Jan 13 21:22:17.471495 extend-filesystems[1857]: Checking size of /dev/nvme0n1p9
Jan 13 21:22:17.475381 tar[1871]: linux-amd64/helm
Jan 13 21:22:17.475700 jq[1876]: true
Jan 13 21:22:17.472907 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 21:22:17.494668 extend-filesystems[1857]: Resized partition /dev/nvme0n1p9
Jan 13 21:22:17.509582 extend-filesystems[1915]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:22:17.539962 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 21:22:17.552896 systemd-logind[1866]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:22:17.570033 systemd-logind[1866]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 13 21:22:17.570072 systemd-logind[1866]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:22:17.571007 systemd-logind[1866]: New seat seat0.
Jan 13 21:22:17.575579 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:22:17.599782 dbus-daemon[1855]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.600 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.619 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.623 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.625 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.626 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.631 INFO Fetch failed with 404: resource not found
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.632 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.633 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.633 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.634 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.634 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.635 INFO Fetch successful
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.635 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 21:22:17.658864 coreos-metadata[1854]: Jan 13 21:22:17.637 INFO Fetch successful
Jan 13 21:22:17.600452 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 21:22:17.601281 dbus-daemon[1855]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1896 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 21:22:17.618672 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 21:22:17.641590 polkitd[1933]: Started polkitd version 121
Jan 13 21:22:17.671622 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 21:22:17.684080 extend-filesystems[1915]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 21:22:17.684080 extend-filesystems[1915]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:22:17.684080 extend-filesystems[1915]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 21:22:17.700702 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1729)
Jan 13 21:22:17.685778 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:22:17.707006 extend-filesystems[1857]: Resized filesystem in /dev/nvme0n1p9
Jan 13 21:22:17.686286 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:22:17.692671 systemd-networkd[1724]: eth0: Gained IPv6LL
Jan 13 21:22:17.708480 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:22:17.715244 bash[1926]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:22:17.719148 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:22:17.730930 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:22:17.748903 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 21:22:17.766000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:17.779163 polkitd[1933]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 21:22:17.772033 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:22:17.779252 polkitd[1933]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 21:22:17.781434 polkitd[1933]: Finished loading, compiling and executing 2 rules
Jan 13 21:22:17.782991 systemd[1]: Starting sshkeys.service...
Jan 13 21:22:17.811300 dbus-daemon[1855]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 21:22:17.811716 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 21:22:17.817218 polkitd[1933]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 21:22:17.874953 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:22:17.887377 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:22:17.946533 amazon-ssm-agent[1959]: Initializing new seelog logger
Jan 13 21:22:17.946533 amazon-ssm-agent[1959]: New Seelog Logger Creation Complete
Jan 13 21:22:17.946533 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.946533 amazon-ssm-agent[1959]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.946533 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 processing appconfig overrides
Jan 13 21:22:17.940952 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 21:22:17.951156 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 processing appconfig overrides
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:17.957057 amazon-ssm-agent[1959]: 2025/01/13 21:22:17 processing appconfig overrides
Jan 13 21:22:17.967309 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO Proxy environment variables:
Jan 13 21:22:18.019263 amazon-ssm-agent[1959]: 2025/01/13 21:22:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:18.019263 amazon-ssm-agent[1959]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:22:18.019263 amazon-ssm-agent[1959]: 2025/01/13 21:22:18 processing appconfig overrides
Jan 13 21:22:18.055008 systemd-hostnamed[1896]: Hostname set to (transient)
Jan 13 21:22:18.067826 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO https_proxy:
Jan 13 21:22:18.069465 systemd-resolved[1687]: System hostname changed to 'ip-172-31-23-220'.
Jan 13 21:22:18.120270 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:22:18.169589 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO http_proxy:
Jan 13 21:22:18.271624 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO no_proxy:
Jan 13 21:22:18.371550 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 21:22:18.381134 sshd_keygen[1875]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:22:18.472306 amazon-ssm-agent[1959]: 2025-01-13 21:22:17 INFO Checking if agent identity type EC2 can be assumed
Jan 13 21:22:18.471843 locksmithd[1900]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:22:18.489006 coreos-metadata[2011]: Jan 13 21:22:18.488 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 21:22:18.489006 coreos-metadata[2011]: Jan 13 21:22:18.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 21:22:18.489006 coreos-metadata[2011]: Jan 13 21:22:18.488 INFO Fetch successful
Jan 13 21:22:18.489006 coreos-metadata[2011]: Jan 13 21:22:18.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 21:22:18.489006 coreos-metadata[2011]: Jan 13 21:22:18.488 INFO Fetch successful
Jan 13 21:22:18.492639 unknown[2011]: wrote ssh authorized keys file for user: core
Jan 13 21:22:18.554626 containerd[1877]: time="2025-01-13T21:22:18.554503486Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:22:18.568629 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO Agent will take identity from EC2
Jan 13 21:22:18.571964 update-ssh-keys[2080]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:22:18.573939 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:22:18.584014 systemd[1]: Finished sshkeys.service.
Jan 13 21:22:18.588745 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:22:18.606217 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:22:18.608870 systemd[1]: Started sshd@0-172.31.23.220:22-147.75.109.163:45778.service - OpenSSH per-connection server daemon (147.75.109.163:45778).
Jan 13 21:22:18.659666 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:22:18.659897 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:22:18.667632 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 21:22:18.676239 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:22:18.691891 containerd[1877]: time="2025-01-13T21:22:18.689585162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692075125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692119449Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692144742Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692320039Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692340927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692414834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:18.692562 containerd[1877]: time="2025-01-13T21:22:18.692434082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.694602165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.694641954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.694667216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.694685314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.694812236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695214 containerd[1877]: time="2025-01-13T21:22:18.695072466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695485 containerd[1877]: time="2025-01-13T21:22:18.695228203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:18.695485 containerd[1877]: time="2025-01-13T21:22:18.695250994Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:22:18.695485 containerd[1877]: time="2025-01-13T21:22:18.695351057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:22:18.695485 containerd[1877]: time="2025-01-13T21:22:18.695413404Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.710289719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.710425097Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.710451453Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.710596924Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.710623668Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Jan 13 21:22:18.713046 containerd[1877]: time="2025-01-13T21:22:18.712799325Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713434017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713686115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713728825Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713749866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713770273Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713805360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713834657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713856488Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713893124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 13 21:22:18.713940 containerd[1877]: time="2025-01-13T21:22:18.713912300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.713935241Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.713969842Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.713999080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714032860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714052531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714072309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714105469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714126182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714144344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714165307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714200584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714224926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714243368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.714338 containerd[1877]: time="2025-01-13T21:22:18.714277075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714297390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714320972Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714381257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714417341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714435746Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714523133Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714645352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714665083Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714683808Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714793467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714816940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:22:18.715221 containerd[1877]: time="2025-01-13T21:22:18.714833570Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:22:18.720512 containerd[1877]: time="2025-01-13T21:22:18.714848796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:22:18.720671 containerd[1877]: time="2025-01-13T21:22:18.717056508Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:22:18.720671 containerd[1877]: time="2025-01-13T21:22:18.717157341Z" level=info msg="Connect containerd service" Jan 13 21:22:18.720671 containerd[1877]: time="2025-01-13T21:22:18.717230441Z" level=info msg="using legacy CRI server" Jan 13 21:22:18.720671 containerd[1877]: time="2025-01-13T21:22:18.717255849Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:22:18.720671 containerd[1877]: time="2025-01-13T21:22:18.717469050Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:22:18.721530 containerd[1877]: time="2025-01-13T21:22:18.721387318Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.723839329Z" level=info msg="Start subscribing containerd event" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.723914837Z" level=info msg="Start recovering state" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.724022565Z" level=info msg="Start event monitor" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.724055029Z" level=info msg="Start 
snapshots syncer" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.724070549Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:22:18.726392 containerd[1877]: time="2025-01-13T21:22:18.724089420Z" level=info msg="Start streaming server" Jan 13 21:22:18.727204 containerd[1877]: time="2025-01-13T21:22:18.727172040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:22:18.732465 containerd[1877]: time="2025-01-13T21:22:18.727722531Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:22:18.732465 containerd[1877]: time="2025-01-13T21:22:18.729613899Z" level=info msg="containerd successfully booted in 0.180277s" Jan 13 21:22:18.731335 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:22:18.741724 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:22:18.756625 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:22:18.767791 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:22:18.769841 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:22:18.770233 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:22:18.869243 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:22:18.916001 sshd[2086]: Accepted publickey for core from 147.75.109.163 port 45778 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:22:18.921524 sshd[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:22:18.944015 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:22:18.953298 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:22:18.966439 systemd-logind[1866]: New session 1 of user core. 
Jan 13 21:22:18.973913 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 13 21:22:18.995435 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:22:19.009200 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:22:19.025522 (systemd)[2100]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:22:19.070942 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 13 21:22:19.171242 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] Starting Core Agent
Jan 13 21:22:19.196192 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 13 21:22:19.196192 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [Registrar] Starting registrar module
Jan 13 21:22:19.196192 amazon-ssm-agent[1959]: 2025-01-13 21:22:18 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 13 21:22:19.196192 amazon-ssm-agent[1959]: 2025-01-13 21:22:19 INFO [EC2Identity] EC2 registration was successful.
Jan 13 21:22:19.196472 amazon-ssm-agent[1959]: 2025-01-13 21:22:19 INFO [CredentialRefresher] credentialRefresher has started
Jan 13 21:22:19.196472 amazon-ssm-agent[1959]: 2025-01-13 21:22:19 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 13 21:22:19.196472 amazon-ssm-agent[1959]: 2025-01-13 21:22:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 13 21:22:19.271305 amazon-ssm-agent[1959]: 2025-01-13 21:22:19 INFO [CredentialRefresher] Next credential rotation will be in 31.3749939048 minutes
Jan 13 21:22:19.280579 systemd[2100]: Queued start job for default target default.target.
Jan 13 21:22:19.289813 systemd[2100]: Created slice app.slice - User Application Slice.
Jan 13 21:22:19.289869 systemd[2100]: Reached target paths.target - Paths.
Jan 13 21:22:19.289891 systemd[2100]: Reached target timers.target - Timers.
Jan 13 21:22:19.294969 systemd[2100]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:22:19.320898 systemd[2100]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:22:19.322032 systemd[2100]: Reached target sockets.target - Sockets.
Jan 13 21:22:19.322055 systemd[2100]: Reached target basic.target - Basic System.
Jan 13 21:22:19.322118 systemd[2100]: Reached target default.target - Main User Target.
Jan 13 21:22:19.322156 systemd[2100]: Startup finished in 284ms.
Jan 13 21:22:19.323162 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:22:19.336854 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:22:19.504913 systemd[1]: Started sshd@1-172.31.23.220:22-147.75.109.163:52174.service - OpenSSH per-connection server daemon (147.75.109.163:52174).
Jan 13 21:22:19.510954 tar[1871]: linux-amd64/LICENSE
Jan 13 21:22:19.510954 tar[1871]: linux-amd64/README.md
Jan 13 21:22:19.553305 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:22:19.712247 sshd[2111]: Accepted publickey for core from 147.75.109.163 port 52174 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:19.714062 sshd[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:19.722369 systemd-logind[1866]: New session 2 of user core.
Jan 13 21:22:19.728983 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:22:19.858568 sshd[2111]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:19.865920 systemd[1]: sshd@1-172.31.23.220:22-147.75.109.163:52174.service: Deactivated successfully.
Jan 13 21:22:19.868133 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:22:19.869305 systemd-logind[1866]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:22:19.871276 systemd-logind[1866]: Removed session 2.
Jan 13 21:22:19.888831 systemd[1]: Started sshd@2-172.31.23.220:22-147.75.109.163:52182.service - OpenSSH per-connection server daemon (147.75.109.163:52182).
Jan 13 21:22:20.065661 sshd[2121]: Accepted publickey for core from 147.75.109.163 port 52182 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:20.067672 sshd[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:20.075972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:20.076038 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:20.080246 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:22:20.084703 systemd[1]: Startup finished in 938ms (kernel) + 6.877s (initrd) + 7.383s (userspace) = 15.200s.
Jan 13 21:22:20.095968 systemd-logind[1866]: New session 3 of user core.
Jan 13 21:22:20.105783 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:22:20.228630 amazon-ssm-agent[1959]: 2025-01-13 21:22:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 13 21:22:20.271700 sshd[2121]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:20.277261 systemd[1]: sshd@2-172.31.23.220:22-147.75.109.163:52182.service: Deactivated successfully.
Jan 13 21:22:20.278055 systemd-logind[1866]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:22:20.282882 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:22:20.287705 systemd-logind[1866]: Removed session 3.
Jan 13 21:22:20.302230 ntpd[1859]: Listen normally on 6 eth0 [fe80::49a:8eff:feba:4d31%2]:123
Jan 13 21:22:20.305157 ntpd[1859]: 13 Jan 21:22:20 ntpd[1859]: Listen normally on 6 eth0 [fe80::49a:8eff:feba:4d31%2]:123
Jan 13 21:22:20.329524 amazon-ssm-agent[1959]: 2025-01-13 21:22:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2133) started
Jan 13 21:22:20.430318 amazon-ssm-agent[1959]: 2025-01-13 21:22:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 13 21:22:21.157522 kubelet[2127]: E0113 21:22:21.157403    2127 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:21.160111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:21.160316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:21.160693 systemd[1]: kubelet.service: Consumed 1.071s CPU time.
Jan 13 21:22:24.502566 systemd-resolved[1687]: Clock change detected. Flushing caches.
Jan 13 21:22:30.508658 systemd[1]: Started sshd@3-172.31.23.220:22-147.75.109.163:60242.service - OpenSSH per-connection server daemon (147.75.109.163:60242).
Jan 13 21:22:30.691522 sshd[2157]: Accepted publickey for core from 147.75.109.163 port 60242 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:30.693196 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:30.700640 systemd-logind[1866]: New session 4 of user core.
Jan 13 21:22:30.709774 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:22:30.837075 sshd[2157]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:30.842103 systemd[1]: sshd@3-172.31.23.220:22-147.75.109.163:60242.service: Deactivated successfully.
Jan 13 21:22:30.846194 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:22:30.849906 systemd-logind[1866]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:22:30.853221 systemd-logind[1866]: Removed session 4.
Jan 13 21:22:30.876857 systemd[1]: Started sshd@4-172.31.23.220:22-147.75.109.163:60256.service - OpenSSH per-connection server daemon (147.75.109.163:60256).
Jan 13 21:22:31.031650 sshd[2164]: Accepted publickey for core from 147.75.109.163 port 60256 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:31.033166 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:31.039596 systemd-logind[1866]: New session 5 of user core.
Jan 13 21:22:31.043653 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:22:31.160034 sshd[2164]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:31.163616 systemd[1]: sshd@4-172.31.23.220:22-147.75.109.163:60256.service: Deactivated successfully.
Jan 13 21:22:31.165612 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:22:31.166931 systemd-logind[1866]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:22:31.168420 systemd-logind[1866]: Removed session 5.
Jan 13 21:22:31.196900 systemd[1]: Started sshd@5-172.31.23.220:22-147.75.109.163:60268.service - OpenSSH per-connection server daemon (147.75.109.163:60268).
Jan 13 21:22:31.354417 sshd[2171]: Accepted publickey for core from 147.75.109.163 port 60268 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:31.355980 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:31.361041 systemd-logind[1866]: New session 6 of user core.
Jan 13 21:22:31.373019 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:22:31.374706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:22:31.384046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:31.508041 sshd[2171]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:31.517252 systemd[1]: sshd@5-172.31.23.220:22-147.75.109.163:60268.service: Deactivated successfully.
Jan 13 21:22:31.526363 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:22:31.531640 systemd-logind[1866]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:22:31.554963 systemd[1]: Started sshd@6-172.31.23.220:22-147.75.109.163:60276.service - OpenSSH per-connection server daemon (147.75.109.163:60276).
Jan 13 21:22:31.556286 systemd-logind[1866]: Removed session 6.
Jan 13 21:22:31.586273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:31.592263 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:31.653600 kubelet[2188]: E0113 21:22:31.653551    2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:31.657961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:31.658159 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:31.715638 sshd[2181]: Accepted publickey for core from 147.75.109.163 port 60276 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:22:31.717145 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:31.722342 systemd-logind[1866]: New session 7 of user core.
Jan 13 21:22:31.729708 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:22:31.838124 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:22:31.838716 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:22:32.219883 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 21:22:32.219989 (dockerd)[2212]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 21:22:32.645130 dockerd[2212]: time="2025-01-13T21:22:32.645065139Z" level=info msg="Starting up"
Jan 13 21:22:32.807082 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1032394734-merged.mount: Deactivated successfully.
Jan 13 21:22:32.868545 dockerd[2212]: time="2025-01-13T21:22:32.868262498Z" level=info msg="Loading containers: start."
Jan 13 21:22:33.119369 kernel: Initializing XFRM netlink socket
Jan 13 21:22:33.173304 (udev-worker)[2232]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:22:33.250797 systemd-networkd[1724]: docker0: Link UP
Jan 13 21:22:33.292882 dockerd[2212]: time="2025-01-13T21:22:33.292837275Z" level=info msg="Loading containers: done."
Jan 13 21:22:33.323894 dockerd[2212]: time="2025-01-13T21:22:33.323845758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 21:22:33.324346 dockerd[2212]: time="2025-01-13T21:22:33.324060780Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 21:22:33.324346 dockerd[2212]: time="2025-01-13T21:22:33.324326103Z" level=info msg="Daemon has completed initialization"
Jan 13 21:22:33.409535 dockerd[2212]: time="2025-01-13T21:22:33.408841622Z" level=info msg="API listen on /run/docker.sock"
Jan 13 21:22:33.409074 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 21:22:34.603789 containerd[1877]: time="2025-01-13T21:22:34.603706426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Jan 13 21:22:35.361369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701703497.mount: Deactivated successfully.
Jan 13 21:22:38.086401 containerd[1877]: time="2025-01-13T21:22:38.086340529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:38.088366 containerd[1877]: time="2025-01-13T21:22:38.088294364Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642"
Jan 13 21:22:38.091208 containerd[1877]: time="2025-01-13T21:22:38.091133209Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:38.100129 containerd[1877]: time="2025-01-13T21:22:38.099835498Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.496045627s"
Jan 13 21:22:38.100129 containerd[1877]: time="2025-01-13T21:22:38.099910373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Jan 13 21:22:38.100598 containerd[1877]: time="2025-01-13T21:22:38.100351129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:38.125003 containerd[1877]: time="2025-01-13T21:22:38.124806852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Jan 13 21:22:41.428671 containerd[1877]: time="2025-01-13T21:22:41.428622094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:41.431448 containerd[1877]: time="2025-01-13T21:22:41.431364657Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409"
Jan 13 21:22:41.434644 containerd[1877]: time="2025-01-13T21:22:41.434560363Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:41.439562 containerd[1877]: time="2025-01-13T21:22:41.439491282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:41.441116 containerd[1877]: time="2025-01-13T21:22:41.440623025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.315778408s"
Jan 13 21:22:41.441116 containerd[1877]: time="2025-01-13T21:22:41.440671555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Jan 13 21:22:41.468744 containerd[1877]: time="2025-01-13T21:22:41.468671591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Jan 13 21:22:41.853288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:22:41.861349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:42.075366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:42.081698 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:42.140386 kubelet[2432]: E0113 21:22:42.139801 2432 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:42.143239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:42.143438 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:43.601287 containerd[1877]: time="2025-01-13T21:22:43.601233558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:43.603304 containerd[1877]: time="2025-01-13T21:22:43.603101295Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035"
Jan 13 21:22:43.605815 containerd[1877]: time="2025-01-13T21:22:43.605330312Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:43.609998 containerd[1877]: time="2025-01-13T21:22:43.609957476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:43.611136 containerd[1877]: time="2025-01-13T21:22:43.611096315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.142356765s"
Jan 13 21:22:43.611297 containerd[1877]: time="2025-01-13T21:22:43.611275085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Jan 13 21:22:43.642748 containerd[1877]: time="2025-01-13T21:22:43.642689269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 21:22:45.255468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758292335.mount: Deactivated successfully.
Jan 13 21:22:45.907897 containerd[1877]: time="2025-01-13T21:22:45.907802500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:45.909823 containerd[1877]: time="2025-01-13T21:22:45.909661352Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470"
Jan 13 21:22:45.913753 containerd[1877]: time="2025-01-13T21:22:45.912177285Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:45.916731 containerd[1877]: time="2025-01-13T21:22:45.915534795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:45.916731 containerd[1877]: time="2025-01-13T21:22:45.916381310Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.273638105s"
Jan 13 21:22:45.916731 containerd[1877]: time="2025-01-13T21:22:45.916409551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 13 21:22:45.944890 containerd[1877]: time="2025-01-13T21:22:45.944851765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:22:46.667523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552011861.mount: Deactivated successfully.
Jan 13 21:22:47.982461 containerd[1877]: time="2025-01-13T21:22:47.982403187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:47.984456 containerd[1877]: time="2025-01-13T21:22:47.984096705Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 21:22:47.986513 containerd[1877]: time="2025-01-13T21:22:47.986296392Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:47.991145 containerd[1877]: time="2025-01-13T21:22:47.991075118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:47.992835 containerd[1877]: time="2025-01-13T21:22:47.992656316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.047756258s"
Jan 13 21:22:47.992835 containerd[1877]: time="2025-01-13T21:22:47.992707392Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 21:22:48.020406 containerd[1877]: time="2025-01-13T21:22:48.020368752Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 21:22:48.287406 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 21:22:48.620461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67521214.mount: Deactivated successfully.
Jan 13 21:22:48.642615 containerd[1877]: time="2025-01-13T21:22:48.642556109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:48.644806 containerd[1877]: time="2025-01-13T21:22:48.644620365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 21:22:48.654526 containerd[1877]: time="2025-01-13T21:22:48.650911055Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:48.659081 containerd[1877]: time="2025-01-13T21:22:48.659031855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:48.660687 containerd[1877]: time="2025-01-13T21:22:48.660601735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 640.191165ms"
Jan 13 21:22:48.660824 containerd[1877]: time="2025-01-13T21:22:48.660690054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 21:22:48.696476 containerd[1877]: time="2025-01-13T21:22:48.696439589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 13 21:22:49.378381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040002000.mount: Deactivated successfully.
Jan 13 21:22:52.353945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 21:22:52.364635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:53.105891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:53.111321 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:53.238001 kubelet[2572]: E0113 21:22:53.237944 2572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:53.241406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:53.242977 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:53.605186 containerd[1877]: time="2025-01-13T21:22:53.605130701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:53.607369 containerd[1877]: time="2025-01-13T21:22:53.607149697Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 13 21:22:53.609622 containerd[1877]: time="2025-01-13T21:22:53.609580961Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:53.614894 containerd[1877]: time="2025-01-13T21:22:53.614832598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:53.616892 containerd[1877]: time="2025-01-13T21:22:53.616024006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.919547117s"
Jan 13 21:22:53.616892 containerd[1877]: time="2025-01-13T21:22:53.616068463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 13 21:22:57.299723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:57.308888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:57.372202 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-7.scope)...
Jan 13 21:22:57.372222 systemd[1]: Reloading...
Jan 13 21:22:57.543513 zram_generator::config[2686]: No configuration found.
Jan 13 21:22:57.715798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:57.813360 systemd[1]: Reloading finished in 440 ms.
Jan 13 21:22:57.875988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:22:57.876105 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:22:57.876394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:57.883130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:58.108399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:58.122368 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:22:58.185503 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:22:58.187523 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:22:58.187523 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:22:58.187523 kubelet[2747]: I0113 21:22:58.186111 2747 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:22:58.676708 kubelet[2747]: I0113 21:22:58.676667 2747 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:22:58.676708 kubelet[2747]: I0113 21:22:58.676698 2747 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:22:58.677113 kubelet[2747]: I0113 21:22:58.677090 2747 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:22:58.725785 kubelet[2747]: I0113 21:22:58.725741 2747 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:22:58.728010 kubelet[2747]: E0113 21:22:58.727609 2747 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.749897 kubelet[2747]: I0113 21:22:58.749865 2747 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:22:58.753528 kubelet[2747]: I0113 21:22:58.753451 2747 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:22:58.753833 kubelet[2747]: I0113 21:22:58.753528 2747 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:22:58.754863 kubelet[2747]: I0113 21:22:58.754759 2747 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:22:58.754954 kubelet[2747]: I0113 21:22:58.754878 2747 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:22:58.755058 kubelet[2747]: I0113 21:22:58.755039 2747 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:22:58.758529 kubelet[2747]: I0113 21:22:58.756156 2747 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:22:58.758529 kubelet[2747]: I0113 21:22:58.756182 2747 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:22:58.758529 kubelet[2747]: I0113 21:22:58.756210 2747 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:22:58.758529 kubelet[2747]: I0113 21:22:58.756228 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:22:58.761275 kubelet[2747]: W0113 21:22:58.760522 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-220&limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.761275 kubelet[2747]: E0113 21:22:58.760572 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-220&limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.761275 kubelet[2747]: W0113 21:22:58.760658 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.761275 kubelet[2747]: E0113 21:22:58.760692 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.762112 kubelet[2747]: I0113 21:22:58.761695 2747 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:22:58.765520 kubelet[2747]: I0113 21:22:58.764241 2747 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:22:58.765520 kubelet[2747]: W0113 21:22:58.764324 2747 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:22:58.765520 kubelet[2747]: I0113 21:22:58.765454 2747 server.go:1264] "Started kubelet"
Jan 13 21:22:58.771275 kubelet[2747]: I0113 21:22:58.771145 2747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:22:58.773530 kubelet[2747]: I0113 21:22:58.772696 2747 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:22:58.783630 kubelet[2747]: I0113 21:22:58.781292 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:22:58.787760 kubelet[2747]: E0113 21:22:58.785504 2747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-220.181a5d789402e912 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-220,UID:ip-172-31-23-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-220,},FirstTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC m=+0.637570492,LastTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC m=+0.637570492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-220,}"
Jan 13 21:22:58.789306 kubelet[2747]: I0113 21:22:58.789110 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:22:58.792520 kubelet[2747]: I0113 21:22:58.789462 2747 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:22:58.808425 kubelet[2747]: E0113 21:22:58.807913 2747 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-23-220\" not found"
Jan 13 21:22:58.808425 kubelet[2747]: I0113 21:22:58.807985 2747 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:22:58.808425 kubelet[2747]: I0113 21:22:58.808159 2747 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:22:58.808425 kubelet[2747]: I0113 21:22:58.808225 2747 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:22:58.809158 kubelet[2747]: W0113 21:22:58.809106 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.809277 kubelet[2747]: E0113 21:22:58.809266 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.809919 kubelet[2747]: E0113 21:22:58.809894 2747 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:22:58.810659 kubelet[2747]: E0113 21:22:58.810615 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-220?timeout=10s\": dial tcp 172.31.23.220:6443: connect: connection refused" interval="200ms"
Jan 13 21:22:58.811283 kubelet[2747]: I0113 21:22:58.811262 2747 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:22:58.811368 kubelet[2747]: I0113 21:22:58.811347 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:22:58.815225 kubelet[2747]: I0113 21:22:58.814167 2747 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:22:58.836416 kubelet[2747]: I0113 21:22:58.834654 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:22:58.836960 kubelet[2747]: I0113 21:22:58.836800 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:22:58.836960 kubelet[2747]: I0113 21:22:58.836834 2747 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:22:58.836960 kubelet[2747]: I0113 21:22:58.836858 2747 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:22:58.837183 kubelet[2747]: E0113 21:22:58.837157 2747 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:22:58.856067 kubelet[2747]: W0113 21:22:58.855907 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.857405 kubelet[2747]: E0113 21:22:58.856558 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:58.864207 kubelet[2747]: I0113 21:22:58.864182 2747 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:22:58.864705 kubelet[2747]: I0113 21:22:58.864531 2747 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:22:58.864705 kubelet[2747]: I0113 21:22:58.864702 2747 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:22:58.868654 kubelet[2747]: I0113 21:22:58.868626 2747 policy_none.go:49] "None policy: Start"
Jan 13 21:22:58.869727 kubelet[2747]: I0113 21:22:58.869570 2747 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:22:58.869885 kubelet[2747]: I0113 21:22:58.869700 2747 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:22:58.880443 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:22:58.891320 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:22:58.896364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:22:58.907763 kubelet[2747]: I0113 21:22:58.907556 2747 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:22:58.907967 kubelet[2747]: I0113 21:22:58.907829 2747 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:22:58.908030 kubelet[2747]: I0113 21:22:58.907984 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:22:58.919730 kubelet[2747]: E0113 21:22:58.918805 2747 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-220\" not found"
Jan 13 21:22:58.921733 kubelet[2747]: I0113 21:22:58.920249 2747 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220"
Jan 13 21:22:58.921733 kubelet[2747]: E0113 21:22:58.920987 2747 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.220:6443/api/v1/nodes\": dial tcp 172.31.23.220:6443: connect: connection refused" node="ip-172-31-23-220"
Jan 13 21:22:58.938367 kubelet[2747]: I0113 21:22:58.938184 2747 topology_manager.go:215] "Topology Admit Handler" podUID="021388dd97c898a01a6994681fff5225" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-220"
Jan 13 21:22:58.942219 kubelet[2747]: I0113 21:22:58.942188 2747 topology_manager.go:215] "Topology Admit Handler" podUID="80aa007f264bcde83e0b035448470421" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:58.945001 kubelet[2747]: I0113 21:22:58.944867 2747 topology_manager.go:215] "Topology Admit Handler" podUID="a0adfa0f3502f337657b62a29e87eb20" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-220"
Jan 13 21:22:58.958514 systemd[1]: Created slice kubepods-burstable-pod80aa007f264bcde83e0b035448470421.slice - libcontainer container kubepods-burstable-pod80aa007f264bcde83e0b035448470421.slice.
Jan 13 21:22:58.979259 systemd[1]: Created slice kubepods-burstable-pod021388dd97c898a01a6994681fff5225.slice - libcontainer container kubepods-burstable-pod021388dd97c898a01a6994681fff5225.slice.
Jan 13 21:22:58.986805 systemd[1]: Created slice kubepods-burstable-poda0adfa0f3502f337657b62a29e87eb20.slice - libcontainer container kubepods-burstable-poda0adfa0f3502f337657b62a29e87eb20.slice.
Jan 13 21:22:59.011748 kubelet[2747]: E0113 21:22:59.011686 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-220?timeout=10s\": dial tcp 172.31.23.220:6443: connect: connection refused" interval="400ms"
Jan 13 21:22:59.110272 kubelet[2747]: I0113 21:22:59.110219 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:59.110272 kubelet[2747]: I0113 21:22:59.110268 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:59.110497 kubelet[2747]: I0113 21:22:59.110301 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:22:59.110497 kubelet[2747]: I0113 21:22:59.110322 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:59.110497 kubelet[2747]: I0113 21:22:59.110344 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:59.110497 kubelet[2747]: I0113 21:22:59.110363 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:22:59.110497 kubelet[2747]: I0113 21:22:59.110384 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0adfa0f3502f337657b62a29e87eb20-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-220\" (UID: \"a0adfa0f3502f337657b62a29e87eb20\") " pod="kube-system/kube-scheduler-ip-172-31-23-220"
Jan 13 21:22:59.110681 kubelet[2747]: I0113 21:22:59.110404 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-ca-certs\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:22:59.110681 kubelet[2747]: I0113 21:22:59.110424 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:22:59.122817 kubelet[2747]: I0113 21:22:59.122781 2747 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220"
Jan 13 21:22:59.123163 kubelet[2747]: E0113 21:22:59.123120 2747 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.220:6443/api/v1/nodes\": dial tcp 172.31.23.220:6443: connect: connection refused" node="ip-172-31-23-220"
Jan 13 21:22:59.277461 containerd[1877]: time="2025-01-13T21:22:59.277333586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-220,Uid:80aa007f264bcde83e0b035448470421,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:59.290360 containerd[1877]: time="2025-01-13T21:22:59.290256310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-220,Uid:021388dd97c898a01a6994681fff5225,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:59.290572 containerd[1877]: time="2025-01-13T21:22:59.290271213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-220,Uid:a0adfa0f3502f337657b62a29e87eb20,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:59.412642 kubelet[2747]: E0113 21:22:59.412595 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-220?timeout=10s\": dial tcp 172.31.23.220:6443: connect: connection refused" interval="800ms"
Jan 13 21:22:59.525687 kubelet[2747]: I0113 21:22:59.525653 2747 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220"
Jan 13 21:22:59.526068 kubelet[2747]: E0113 21:22:59.526039 2747 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.220:6443/api/v1/nodes\": dial tcp 172.31.23.220:6443: connect: connection refused" node="ip-172-31-23-220"
Jan 13 21:22:59.771471 kubelet[2747]: W0113 21:22:59.771405 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:59.771471 kubelet[2747]: E0113 21:22:59.771491 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused
Jan 13 21:22:59.840967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943355526.mount: Deactivated successfully.
Jan 13 21:22:59.857331 containerd[1877]: time="2025-01-13T21:22:59.857282019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:59.859583 containerd[1877]: time="2025-01-13T21:22:59.859530745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:22:59.861303 containerd[1877]: time="2025-01-13T21:22:59.861265043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:59.863211 containerd[1877]: time="2025-01-13T21:22:59.863174493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:59.864917 containerd[1877]: time="2025-01-13T21:22:59.864862706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:59.867409 containerd[1877]: time="2025-01-13T21:22:59.867358671Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:59.869251 containerd[1877]: time="2025-01-13T21:22:59.869189497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:59.872620 containerd[1877]: time="2025-01-13T21:22:59.872561225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:59.875504 
containerd[1877]: time="2025-01-13T21:22:59.873977973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.597082ms" Jan 13 21:22:59.876807 containerd[1877]: time="2025-01-13T21:22:59.876706035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.10328ms" Jan 13 21:22:59.878420 containerd[1877]: time="2025-01-13T21:22:59.878379483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.957667ms" Jan 13 21:23:00.188552 kubelet[2747]: W0113 21:23:00.186047 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-220&limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.188552 kubelet[2747]: E0113 21:23:00.186172 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-220&limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.188552 kubelet[2747]: W0113 21:23:00.186322 2747 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.188552 kubelet[2747]: E0113 21:23:00.186378 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.221744 kubelet[2747]: E0113 21:23:00.221686 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-220?timeout=10s\": dial tcp 172.31.23.220:6443: connect: connection refused" interval="1.6s" Jan 13 21:23:00.255204 containerd[1877]: time="2025-01-13T21:23:00.254909955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:00.255204 containerd[1877]: time="2025-01-13T21:23:00.254988460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:00.255204 containerd[1877]: time="2025-01-13T21:23:00.255014288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.256410 containerd[1877]: time="2025-01-13T21:23:00.256308242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.263603 containerd[1877]: time="2025-01-13T21:23:00.262835085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:00.263603 containerd[1877]: time="2025-01-13T21:23:00.263442396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:00.263603 containerd[1877]: time="2025-01-13T21:23:00.263490851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.266038 containerd[1877]: time="2025-01-13T21:23:00.264418924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.266573 containerd[1877]: time="2025-01-13T21:23:00.266083599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:00.266573 containerd[1877]: time="2025-01-13T21:23:00.266147993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:00.266573 containerd[1877]: time="2025-01-13T21:23:00.266164964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.266573 containerd[1877]: time="2025-01-13T21:23:00.266322425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:00.272943 kubelet[2747]: W0113 21:23:00.272868 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.276512 kubelet[2747]: E0113 21:23:00.276177 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.328030 systemd[1]: Started cri-containerd-cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1.scope - libcontainer container cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1. Jan 13 21:23:00.343313 kubelet[2747]: I0113 21:23:00.343273 2747 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220" Jan 13 21:23:00.344369 systemd[1]: Started cri-containerd-39a4d48111211f52f99f888fc83406ee7990636ed75278abd4ccfdb3bbfcbce8.scope - libcontainer container 39a4d48111211f52f99f888fc83406ee7990636ed75278abd4ccfdb3bbfcbce8. Jan 13 21:23:00.350163 kubelet[2747]: E0113 21:23:00.350105 2747 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.220:6443/api/v1/nodes\": dial tcp 172.31.23.220:6443: connect: connection refused" node="ip-172-31-23-220" Jan 13 21:23:00.382687 systemd[1]: Started cri-containerd-7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177.scope - libcontainer container 7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177. 
Jan 13 21:23:00.482610 containerd[1877]: time="2025-01-13T21:23:00.480132593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-220,Uid:80aa007f264bcde83e0b035448470421,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1\"" Jan 13 21:23:00.496343 containerd[1877]: time="2025-01-13T21:23:00.496296789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-220,Uid:021388dd97c898a01a6994681fff5225,Namespace:kube-system,Attempt:0,} returns sandbox id \"39a4d48111211f52f99f888fc83406ee7990636ed75278abd4ccfdb3bbfcbce8\"" Jan 13 21:23:00.504784 containerd[1877]: time="2025-01-13T21:23:00.504737780Z" level=info msg="CreateContainer within sandbox \"39a4d48111211f52f99f888fc83406ee7990636ed75278abd4ccfdb3bbfcbce8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:23:00.523809 containerd[1877]: time="2025-01-13T21:23:00.523760816Z" level=info msg="CreateContainer within sandbox \"cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:23:00.528051 containerd[1877]: time="2025-01-13T21:23:00.528006788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-220,Uid:a0adfa0f3502f337657b62a29e87eb20,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177\"" Jan 13 21:23:00.534640 containerd[1877]: time="2025-01-13T21:23:00.534442398Z" level=info msg="CreateContainer within sandbox \"7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:23:00.551904 containerd[1877]: time="2025-01-13T21:23:00.551856886Z" level=info msg="CreateContainer within sandbox \"39a4d48111211f52f99f888fc83406ee7990636ed75278abd4ccfdb3bbfcbce8\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00903d0e11a9d39ec38398082da310df6c06ceb6a522be03ae21a287ab5b5e5d\"" Jan 13 21:23:00.553714 containerd[1877]: time="2025-01-13T21:23:00.553677552Z" level=info msg="StartContainer for \"00903d0e11a9d39ec38398082da310df6c06ceb6a522be03ae21a287ab5b5e5d\"" Jan 13 21:23:00.570932 containerd[1877]: time="2025-01-13T21:23:00.570795134Z" level=info msg="CreateContainer within sandbox \"cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e\"" Jan 13 21:23:00.571689 containerd[1877]: time="2025-01-13T21:23:00.571614232Z" level=info msg="StartContainer for \"31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e\"" Jan 13 21:23:00.598215 containerd[1877]: time="2025-01-13T21:23:00.598038045Z" level=info msg="CreateContainer within sandbox \"7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc\"" Jan 13 21:23:00.599504 containerd[1877]: time="2025-01-13T21:23:00.599369915Z" level=info msg="StartContainer for \"9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc\"" Jan 13 21:23:00.662813 systemd[1]: Started cri-containerd-00903d0e11a9d39ec38398082da310df6c06ceb6a522be03ae21a287ab5b5e5d.scope - libcontainer container 00903d0e11a9d39ec38398082da310df6c06ceb6a522be03ae21a287ab5b5e5d. Jan 13 21:23:00.679109 systemd[1]: Started cri-containerd-9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc.scope - libcontainer container 9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc. 
Jan 13 21:23:00.693278 systemd[1]: Started cri-containerd-31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e.scope - libcontainer container 31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e. Jan 13 21:23:00.787190 containerd[1877]: time="2025-01-13T21:23:00.787073598Z" level=info msg="StartContainer for \"00903d0e11a9d39ec38398082da310df6c06ceb6a522be03ae21a287ab5b5e5d\" returns successfully" Jan 13 21:23:00.826872 containerd[1877]: time="2025-01-13T21:23:00.826815457Z" level=info msg="StartContainer for \"31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e\" returns successfully" Jan 13 21:23:00.894031 kubelet[2747]: E0113 21:23:00.893390 2747 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:00.926742 containerd[1877]: time="2025-01-13T21:23:00.926678485Z" level=info msg="StartContainer for \"9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc\" returns successfully" Jan 13 21:23:01.314645 kubelet[2747]: E0113 21:23:01.314426 2747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-220.181a5d789402e912 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-220,UID:ip-172-31-23-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-220,},FirstTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC m=+0.637570492,LastTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC 
m=+0.637570492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-220,}" Jan 13 21:23:01.593874 kubelet[2747]: W0113 21:23:01.591384 2747 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:01.593874 kubelet[2747]: E0113 21:23:01.593593 2747 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.220:6443: connect: connection refused Jan 13 21:23:01.989510 kubelet[2747]: I0113 21:23:01.985654 2747 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220" Jan 13 21:23:03.089512 update_engine[1867]: I20250113 21:23:03.088522 1867 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:23:03.214589 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3039) Jan 13 21:23:03.593132 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3039) Jan 13 21:23:05.400504 kubelet[2747]: E0113 21:23:05.400438 2747 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-220\" not found" node="ip-172-31-23-220" Jan 13 21:23:05.587500 kubelet[2747]: I0113 21:23:05.585243 2747 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-220" Jan 13 21:23:05.760791 kubelet[2747]: I0113 21:23:05.760330 2747 apiserver.go:52] "Watching apiserver" Jan 13 21:23:05.809675 kubelet[2747]: I0113 21:23:05.809555 2747 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:23:05.885648 kubelet[2747]: E0113 21:23:05.885607 2747 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-220" Jan 13 21:23:08.105224 systemd[1]: Reloading requested from client PID 3208 ('systemctl') (unit session-7.scope)... Jan 13 21:23:08.105241 systemd[1]: Reloading... Jan 13 21:23:08.304540 zram_generator::config[3251]: No configuration found. Jan 13 21:23:08.497442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:23:08.637439 systemd[1]: Reloading finished in 531 ms. 
Jan 13 21:23:08.693211 kubelet[2747]: E0113 21:23:08.692652 2747 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-23-220.181a5d789402e912 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-220,UID:ip-172-31-23-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-220,},FirstTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC m=+0.637570492,LastTimestamp:2025-01-13 21:22:58.765424914 +0000 UTC m=+0.637570492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-220,}" Jan 13 21:23:08.692843 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:23:08.710312 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:23:08.710887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:23:08.722031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:23:09.027826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:23:09.042898 (kubelet)[3305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:23:09.212510 kubelet[3305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:23:09.212510 kubelet[3305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:23:09.212510 kubelet[3305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:23:09.212510 kubelet[3305]: I0113 21:23:09.212079 3305 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:23:09.231578 kubelet[3305]: I0113 21:23:09.230769 3305 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:23:09.231578 kubelet[3305]: I0113 21:23:09.230803 3305 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:23:09.231578 kubelet[3305]: I0113 21:23:09.231455 3305 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:23:09.242527 kubelet[3305]: I0113 21:23:09.242489 3305 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:23:09.249274 kubelet[3305]: I0113 21:23:09.249186 3305 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:23:09.273632 kubelet[3305]: I0113 21:23:09.273599 3305 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:23:09.275404 kubelet[3305]: I0113 21:23:09.274267 3305 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:23:09.275404 kubelet[3305]: I0113 21:23:09.274313 3305 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:23:09.275404 kubelet[3305]: I0113 21:23:09.274598 3305 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
21:23:09.275404 kubelet[3305]: I0113 21:23:09.274616 3305 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:23:09.276117 kubelet[3305]: I0113 21:23:09.276088 3305 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:23:09.277199 kubelet[3305]: I0113 21:23:09.276343 3305 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:23:09.278186 kubelet[3305]: I0113 21:23:09.278167 3305 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:23:09.278498 kubelet[3305]: I0113 21:23:09.278446 3305 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:23:09.278786 kubelet[3305]: I0113 21:23:09.278697 3305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:23:09.323331 kubelet[3305]: I0113 21:23:09.323295 3305 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:23:09.323560 kubelet[3305]: I0113 21:23:09.323544 3305 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:23:09.330515 kubelet[3305]: I0113 21:23:09.328318 3305 server.go:1264] "Started kubelet" Jan 13 21:23:09.331651 kubelet[3305]: I0113 21:23:09.331632 3305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:23:09.346162 kubelet[3305]: I0113 21:23:09.345666 3305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:23:09.357082 kubelet[3305]: I0113 21:23:09.356933 3305 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:23:09.357466 kubelet[3305]: I0113 21:23:09.357402 3305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:23:09.358028 kubelet[3305]: I0113 21:23:09.358004 3305 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:23:09.362491 kubelet[3305]: I0113 21:23:09.362450 3305 
desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:23:09.362812 kubelet[3305]: I0113 21:23:09.357202 3305 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:23:09.366314 kubelet[3305]: I0113 21:23:09.366291 3305 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:23:09.371737 kubelet[3305]: I0113 21:23:09.371691 3305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:23:09.373279 kubelet[3305]: I0113 21:23:09.373248 3305 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:23:09.373393 kubelet[3305]: I0113 21:23:09.373347 3305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:23:09.373747 kubelet[3305]: I0113 21:23:09.373725 3305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:23:09.373865 kubelet[3305]: I0113 21:23:09.373855 3305 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:23:09.373964 kubelet[3305]: I0113 21:23:09.373956 3305 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:23:09.374086 kubelet[3305]: E0113 21:23:09.374068 3305 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:23:09.379021 kubelet[3305]: I0113 21:23:09.378990 3305 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:23:09.467304 kubelet[3305]: I0113 21:23:09.467280 3305 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-220"
Jan 13 21:23:09.473446 kubelet[3305]: I0113 21:23:09.473319 3305 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:23:09.473941 kubelet[3305]: I0113 21:23:09.473731 3305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:23:09.474140 kubelet[3305]: I0113 21:23:09.474129 3305 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:09.474632 kubelet[3305]: E0113 21:23:09.474578 3305 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:23:09.475345 kubelet[3305]: I0113 21:23:09.475332 3305 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:23:09.475441 kubelet[3305]: I0113 21:23:09.475414 3305 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:23:09.477216 kubelet[3305]: I0113 21:23:09.475830 3305 policy_none.go:49] "None policy: Start"
Jan 13 21:23:09.480763 kubelet[3305]: I0113 21:23:09.479514 3305 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:23:09.480763 kubelet[3305]: I0113 21:23:09.479545 3305 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:23:09.480763 kubelet[3305]: I0113 21:23:09.479768 3305 state_mem.go:75] "Updated machine memory state"
Jan 13 21:23:09.485240 kubelet[3305]: I0113 21:23:09.485220 3305 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-220"
Jan 13 21:23:09.485408 kubelet[3305]: I0113 21:23:09.485399 3305 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-220"
Jan 13 21:23:09.514821 kubelet[3305]: I0113 21:23:09.514795 3305 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:23:09.515118 kubelet[3305]: I0113 21:23:09.515087 3305 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:23:09.515311 kubelet[3305]: I0113 21:23:09.515298 3305 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:23:09.676382 kubelet[3305]: I0113 21:23:09.676174 3305 topology_manager.go:215] "Topology Admit Handler" podUID="80aa007f264bcde83e0b035448470421" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.676382 kubelet[3305]: I0113 21:23:09.676305 3305 topology_manager.go:215] "Topology Admit Handler" podUID="a0adfa0f3502f337657b62a29e87eb20" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-220"
Jan 13 21:23:09.676382 kubelet[3305]: I0113 21:23:09.676374 3305 topology_manager.go:215] "Topology Admit Handler" podUID="021388dd97c898a01a6994681fff5225" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-220"
Jan 13 21:23:09.770749 kubelet[3305]: I0113 21:23:09.769685 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.770749 kubelet[3305]: I0113 21:23:09.769817 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.770749 kubelet[3305]: I0113 21:23:09.769850 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.770749 kubelet[3305]: I0113 21:23:09.769888 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0adfa0f3502f337657b62a29e87eb20-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-220\" (UID: \"a0adfa0f3502f337657b62a29e87eb20\") " pod="kube-system/kube-scheduler-ip-172-31-23-220"
Jan 13 21:23:09.770749 kubelet[3305]: I0113 21:23:09.769915 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-ca-certs\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:23:09.771026 kubelet[3305]: I0113 21:23:09.769941 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:23:09.771026 kubelet[3305]: I0113 21:23:09.769977 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.771026 kubelet[3305]: I0113 21:23:09.770003 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80aa007f264bcde83e0b035448470421-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-220\" (UID: \"80aa007f264bcde83e0b035448470421\") " pod="kube-system/kube-controller-manager-ip-172-31-23-220"
Jan 13 21:23:09.771026 kubelet[3305]: I0113 21:23:09.770100 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/021388dd97c898a01a6994681fff5225-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-220\" (UID: \"021388dd97c898a01a6994681fff5225\") " pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:23:10.291196 kubelet[3305]: I0113 21:23:10.291158 3305 apiserver.go:52] "Watching apiserver"
Jan 13 21:23:10.363965 kubelet[3305]: I0113 21:23:10.363830 3305 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:23:10.388994 kubelet[3305]: I0113 21:23:10.388737 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-220" podStartSLOduration=1.388717984 podStartE2EDuration="1.388717984s" podCreationTimestamp="2025-01-13 21:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:10.366862362 +0000 UTC m=+1.258261584" watchObservedRunningTime="2025-01-13 21:23:10.388717984 +0000 UTC m=+1.280117207"
Jan 13 21:23:10.388994 kubelet[3305]: I0113 21:23:10.388877 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-220" podStartSLOduration=1.388868867 podStartE2EDuration="1.388868867s" podCreationTimestamp="2025-01-13 21:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:10.388509546 +0000 UTC m=+1.279908765" watchObservedRunningTime="2025-01-13 21:23:10.388868867 +0000 UTC m=+1.280268091"
Jan 13 21:23:10.434931 kubelet[3305]: E0113 21:23:10.434872 3305 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-220"
Jan 13 21:23:10.451502 kubelet[3305]: I0113 21:23:10.449222 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-220" podStartSLOduration=1.449197977 podStartE2EDuration="1.449197977s" podCreationTimestamp="2025-01-13 21:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:10.402426928 +0000 UTC m=+1.293826150" watchObservedRunningTime="2025-01-13 21:23:10.449197977 +0000 UTC m=+1.340597197"
Jan 13 21:23:10.733425 sudo[2197]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:10.758117 sshd[2181]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:10.761309 systemd[1]: sshd@6-172.31.23.220:22-147.75.109.163:60276.service: Deactivated successfully.
Jan 13 21:23:10.763464 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:23:10.763788 systemd[1]: session-7.scope: Consumed 4.525s CPU time, 188.6M memory peak, 0B memory swap peak.
Jan 13 21:23:10.765288 systemd-logind[1866]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:23:10.766689 systemd-logind[1866]: Removed session 7.
Jan 13 21:23:22.816453 kubelet[3305]: I0113 21:23:22.816215 3305 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:23:22.819092 containerd[1877]: time="2025-01-13T21:23:22.817364939Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:23:22.821345 kubelet[3305]: I0113 21:23:22.818454 3305 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:23:22.920616 kubelet[3305]: I0113 21:23:22.920108 3305 topology_manager.go:215] "Topology Admit Handler" podUID="3472d1d2-88ff-4e0e-b5e1-00f474139a6f" podNamespace="kube-system" podName="kube-proxy-stwx5"
Jan 13 21:23:22.937922 systemd[1]: Created slice kubepods-besteffort-pod3472d1d2_88ff_4e0e_b5e1_00f474139a6f.slice - libcontainer container kubepods-besteffort-pod3472d1d2_88ff_4e0e_b5e1_00f474139a6f.slice.
Jan 13 21:23:22.940529 kubelet[3305]: W0113 21:23:22.940378 3305 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-23-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-220' and this object
Jan 13 21:23:22.940529 kubelet[3305]: E0113 21:23:22.940424 3305 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-23-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-220' and this object
Jan 13 21:23:22.940843 kubelet[3305]: W0113 21:23:22.940823 3305 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-220' and this object
Jan 13 21:23:22.940957 kubelet[3305]: E0113 21:23:22.940945 3305 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-220' and this object
Jan 13 21:23:22.976816 kubelet[3305]: I0113 21:23:22.976775 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-kube-proxy\") pod \"kube-proxy-stwx5\" (UID: \"3472d1d2-88ff-4e0e-b5e1-00f474139a6f\") " pod="kube-system/kube-proxy-stwx5"
Jan 13 21:23:22.977258 kubelet[3305]: I0113 21:23:22.977235 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-lib-modules\") pod \"kube-proxy-stwx5\" (UID: \"3472d1d2-88ff-4e0e-b5e1-00f474139a6f\") " pod="kube-system/kube-proxy-stwx5"
Jan 13 21:23:22.978802 kubelet[3305]: I0113 21:23:22.977397 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xsq\" (UniqueName: \"kubernetes.io/projected/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-kube-api-access-62xsq\") pod \"kube-proxy-stwx5\" (UID: \"3472d1d2-88ff-4e0e-b5e1-00f474139a6f\") " pod="kube-system/kube-proxy-stwx5"
Jan 13 21:23:22.978802 kubelet[3305]: I0113 21:23:22.977444 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-xtables-lock\") pod \"kube-proxy-stwx5\" (UID: \"3472d1d2-88ff-4e0e-b5e1-00f474139a6f\") " pod="kube-system/kube-proxy-stwx5"
Jan 13 21:23:23.012928 kubelet[3305]: I0113 21:23:23.011444 3305 topology_manager.go:215] "Topology Admit Handler" podUID="fe1918b0-232d-4c66-bb67-9bfea306f67a" podNamespace="kube-flannel" podName="kube-flannel-ds-p2cpq"
Jan 13 21:23:23.026593 systemd[1]: Created slice kubepods-burstable-podfe1918b0_232d_4c66_bb67_9bfea306f67a.slice - libcontainer container kubepods-burstable-podfe1918b0_232d_4c66_bb67_9bfea306f67a.slice.
Jan 13 21:23:23.079401 kubelet[3305]: I0113 21:23:23.078251 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/fe1918b0-232d-4c66-bb67-9bfea306f67a-flannel-cfg\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.079401 kubelet[3305]: I0113 21:23:23.078396 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbt8v\" (UniqueName: \"kubernetes.io/projected/fe1918b0-232d-4c66-bb67-9bfea306f67a-kube-api-access-sbt8v\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.079401 kubelet[3305]: I0113 21:23:23.078444 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1918b0-232d-4c66-bb67-9bfea306f67a-xtables-lock\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.079401 kubelet[3305]: I0113 21:23:23.078553 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fe1918b0-232d-4c66-bb67-9bfea306f67a-run\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.079401 kubelet[3305]: I0113 21:23:23.078579 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fe1918b0-232d-4c66-bb67-9bfea306f67a-cni\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.079995 kubelet[3305]: I0113 21:23:23.078603 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fe1918b0-232d-4c66-bb67-9bfea306f67a-cni-plugin\") pod \"kube-flannel-ds-p2cpq\" (UID: \"fe1918b0-232d-4c66-bb67-9bfea306f67a\") " pod="kube-flannel/kube-flannel-ds-p2cpq"
Jan 13 21:23:23.333436 containerd[1877]: time="2025-01-13T21:23:23.332800318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-p2cpq,Uid:fe1918b0-232d-4c66-bb67-9bfea306f67a,Namespace:kube-flannel,Attempt:0,}"
Jan 13 21:23:23.409623 containerd[1877]: time="2025-01-13T21:23:23.409262502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:23.409867 containerd[1877]: time="2025-01-13T21:23:23.409616110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:23.409867 containerd[1877]: time="2025-01-13T21:23:23.409643825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:23.410764 containerd[1877]: time="2025-01-13T21:23:23.410624727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:23.448063 systemd[1]: run-containerd-runc-k8s.io-6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025-runc.IL1bTh.mount: Deactivated successfully.
Jan 13 21:23:23.456696 systemd[1]: Started cri-containerd-6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025.scope - libcontainer container 6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025.
Jan 13 21:23:23.527955 containerd[1877]: time="2025-01-13T21:23:23.527139397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-p2cpq,Uid:fe1918b0-232d-4c66-bb67-9bfea306f67a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\""
Jan 13 21:23:23.530888 containerd[1877]: time="2025-01-13T21:23:23.530521288Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 21:23:24.091067 kubelet[3305]: E0113 21:23:24.091014 3305 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:23:24.091067 kubelet[3305]: E0113 21:23:24.091073 3305 projected.go:200] Error preparing data for projected volume kube-api-access-62xsq for pod kube-system/kube-proxy-stwx5: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:23:24.091683 kubelet[3305]: E0113 21:23:24.091163 3305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-kube-api-access-62xsq podName:3472d1d2-88ff-4e0e-b5e1-00f474139a6f nodeName:}" failed. No retries permitted until 2025-01-13 21:23:24.591137544 +0000 UTC m=+15.482536761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-62xsq" (UniqueName: "kubernetes.io/projected/3472d1d2-88ff-4e0e-b5e1-00f474139a6f-kube-api-access-62xsq") pod "kube-proxy-stwx5" (UID: "3472d1d2-88ff-4e0e-b5e1-00f474139a6f") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:23:24.752658 containerd[1877]: time="2025-01-13T21:23:24.752609916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stwx5,Uid:3472d1d2-88ff-4e0e-b5e1-00f474139a6f,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:24.799718 containerd[1877]: time="2025-01-13T21:23:24.799469962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:24.799718 containerd[1877]: time="2025-01-13T21:23:24.799651324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:24.799718 containerd[1877]: time="2025-01-13T21:23:24.799667998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:24.800215 containerd[1877]: time="2025-01-13T21:23:24.800027154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:24.832092 systemd[1]: run-containerd-runc-k8s.io-90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058-runc.AuKwtU.mount: Deactivated successfully.
Jan 13 21:23:24.846866 systemd[1]: Started cri-containerd-90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058.scope - libcontainer container 90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058.
Jan 13 21:23:24.908984 containerd[1877]: time="2025-01-13T21:23:24.908942204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stwx5,Uid:3472d1d2-88ff-4e0e-b5e1-00f474139a6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058\""
Jan 13 21:23:24.913374 containerd[1877]: time="2025-01-13T21:23:24.913082879Z" level=info msg="CreateContainer within sandbox \"90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:23:24.942973 containerd[1877]: time="2025-01-13T21:23:24.942924455Z" level=info msg="CreateContainer within sandbox \"90c67470a4dc8d6f8a14512aa80590c68e32d0c450668c8f9b4f13b7b070c058\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"694741a03984a5cc734368cae108a2b67405ced345f0e708c0f5773c20512e56\""
Jan 13 21:23:24.943865 containerd[1877]: time="2025-01-13T21:23:24.943830170Z" level=info msg="StartContainer for \"694741a03984a5cc734368cae108a2b67405ced345f0e708c0f5773c20512e56\""
Jan 13 21:23:24.993075 systemd[1]: Started cri-containerd-694741a03984a5cc734368cae108a2b67405ced345f0e708c0f5773c20512e56.scope - libcontainer container 694741a03984a5cc734368cae108a2b67405ced345f0e708c0f5773c20512e56.
Jan 13 21:23:25.042762 containerd[1877]: time="2025-01-13T21:23:25.042637714Z" level=info msg="StartContainer for \"694741a03984a5cc734368cae108a2b67405ced345f0e708c0f5773c20512e56\" returns successfully"
Jan 13 21:23:25.689750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629674261.mount: Deactivated successfully.
Jan 13 21:23:25.815167 containerd[1877]: time="2025-01-13T21:23:25.815089567Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:25.818212 containerd[1877]: time="2025-01-13T21:23:25.818132369Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 13 21:23:25.821701 containerd[1877]: time="2025-01-13T21:23:25.820335226Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:25.827508 containerd[1877]: time="2025-01-13T21:23:25.826934601Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:25.829086 containerd[1877]: time="2025-01-13T21:23:25.829041096Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.298472952s"
Jan 13 21:23:25.829263 containerd[1877]: time="2025-01-13T21:23:25.829107286Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 13 21:23:25.835174 containerd[1877]: time="2025-01-13T21:23:25.835132441Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 21:23:25.865880 containerd[1877]: time="2025-01-13T21:23:25.865820088Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99\""
Jan 13 21:23:25.868172 containerd[1877]: time="2025-01-13T21:23:25.868134669Z" level=info msg="StartContainer for \"7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99\""
Jan 13 21:23:25.942726 systemd[1]: Started cri-containerd-7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99.scope - libcontainer container 7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99.
Jan 13 21:23:25.995917 containerd[1877]: time="2025-01-13T21:23:25.995321772Z" level=info msg="StartContainer for \"7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99\" returns successfully"
Jan 13 21:23:25.999034 systemd[1]: cri-containerd-7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99.scope: Deactivated successfully.
Jan 13 21:23:26.051992 containerd[1877]: time="2025-01-13T21:23:26.051925297Z" level=info msg="shim disconnected" id=7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99 namespace=k8s.io
Jan 13 21:23:26.051992 containerd[1877]: time="2025-01-13T21:23:26.051985714Z" level=warning msg="cleaning up after shim disconnected" id=7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99 namespace=k8s.io
Jan 13 21:23:26.051992 containerd[1877]: time="2025-01-13T21:23:26.051997310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:23:26.470348 containerd[1877]: time="2025-01-13T21:23:26.470306262Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 21:23:26.484094 kubelet[3305]: I0113 21:23:26.482972 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stwx5" podStartSLOduration=4.482949586 podStartE2EDuration="4.482949586s" podCreationTimestamp="2025-01-13 21:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:25.480573269 +0000 UTC m=+16.371972492" watchObservedRunningTime="2025-01-13 21:23:26.482949586 +0000 UTC m=+17.374348811"
Jan 13 21:23:26.609022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e8941b0e4429aa45bf2b92cf3c7185caea4d02238a14065c77be3f19f9d4e99-rootfs.mount: Deactivated successfully.
Jan 13 21:23:28.625102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71712256.mount: Deactivated successfully.
Jan 13 21:23:29.862643 containerd[1877]: time="2025-01-13T21:23:29.862594479Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:29.864568 containerd[1877]: time="2025-01-13T21:23:29.864510146Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357"
Jan 13 21:23:29.868352 containerd[1877]: time="2025-01-13T21:23:29.866883683Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:29.874132 containerd[1877]: time="2025-01-13T21:23:29.874087155Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:29.875713 containerd[1877]: time="2025-01-13T21:23:29.875659100Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.40530621s"
Jan 13 21:23:29.875713 containerd[1877]: time="2025-01-13T21:23:29.875702318Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 13 21:23:29.881307 containerd[1877]: time="2025-01-13T21:23:29.881173566Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:23:29.905935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034213705.mount: Deactivated successfully.
Jan 13 21:23:29.911357 containerd[1877]: time="2025-01-13T21:23:29.911312028Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579\""
Jan 13 21:23:29.913017 containerd[1877]: time="2025-01-13T21:23:29.912298429Z" level=info msg="StartContainer for \"3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579\""
Jan 13 21:23:29.965732 systemd[1]: Started cri-containerd-3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579.scope - libcontainer container 3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579.
Jan 13 21:23:30.023221 systemd[1]: cri-containerd-3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579.scope: Deactivated successfully.
Jan 13 21:23:30.030711 containerd[1877]: time="2025-01-13T21:23:30.030407018Z" level=info msg="StartContainer for \"3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579\" returns successfully"
Jan 13 21:23:30.047218 kubelet[3305]: I0113 21:23:30.047021 3305 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:23:30.085930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579-rootfs.mount: Deactivated successfully.
Jan 13 21:23:30.106382 containerd[1877]: time="2025-01-13T21:23:30.106236087Z" level=info msg="shim disconnected" id=3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579 namespace=k8s.io
Jan 13 21:23:30.106382 containerd[1877]: time="2025-01-13T21:23:30.106314349Z" level=warning msg="cleaning up after shim disconnected" id=3f0b97b94bb14403dfc3e38ca88e42aa06ffd7aef75d107879335d33f45c1579 namespace=k8s.io
Jan 13 21:23:30.106382 containerd[1877]: time="2025-01-13T21:23:30.106330611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:23:30.139062 kubelet[3305]: I0113 21:23:30.138908 3305 topology_manager.go:215] "Topology Admit Handler" podUID="1b1d2598-8e51-4ed1-8230-2c253bc7cbda" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8ms7f"
Jan 13 21:23:30.139669 kubelet[3305]: I0113 21:23:30.139645 3305 topology_manager.go:215] "Topology Admit Handler" podUID="372e3431-bfdf-4857-b3f3-cf3297dddc2b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-frqp9"
Jan 13 21:23:30.157765 systemd[1]: Created slice kubepods-burstable-pod372e3431_bfdf_4857_b3f3_cf3297dddc2b.slice - libcontainer container kubepods-burstable-pod372e3431_bfdf_4857_b3f3_cf3297dddc2b.slice.
Jan 13 21:23:30.170933 systemd[1]: Created slice kubepods-burstable-pod1b1d2598_8e51_4ed1_8230_2c253bc7cbda.slice - libcontainer container kubepods-burstable-pod1b1d2598_8e51_4ed1_8230_2c253bc7cbda.slice.
Jan 13 21:23:30.271707 kubelet[3305]: I0113 21:23:30.271655 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372e3431-bfdf-4857-b3f3-cf3297dddc2b-config-volume\") pod \"coredns-7db6d8ff4d-frqp9\" (UID: \"372e3431-bfdf-4857-b3f3-cf3297dddc2b\") " pod="kube-system/coredns-7db6d8ff4d-frqp9"
Jan 13 21:23:30.271707 kubelet[3305]: I0113 21:23:30.271705 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8s8\" (UniqueName: \"kubernetes.io/projected/372e3431-bfdf-4857-b3f3-cf3297dddc2b-kube-api-access-zc8s8\") pod \"coredns-7db6d8ff4d-frqp9\" (UID: \"372e3431-bfdf-4857-b3f3-cf3297dddc2b\") " pod="kube-system/coredns-7db6d8ff4d-frqp9"
Jan 13 21:23:30.271894 kubelet[3305]: I0113 21:23:30.271746 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b1d2598-8e51-4ed1-8230-2c253bc7cbda-config-volume\") pod \"coredns-7db6d8ff4d-8ms7f\" (UID: \"1b1d2598-8e51-4ed1-8230-2c253bc7cbda\") " pod="kube-system/coredns-7db6d8ff4d-8ms7f"
Jan 13 21:23:30.271894 kubelet[3305]: I0113 21:23:30.271772 3305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfq7w\" (UniqueName: \"kubernetes.io/projected/1b1d2598-8e51-4ed1-8230-2c253bc7cbda-kube-api-access-tfq7w\") pod \"coredns-7db6d8ff4d-8ms7f\" (UID: \"1b1d2598-8e51-4ed1-8230-2c253bc7cbda\") " pod="kube-system/coredns-7db6d8ff4d-8ms7f"
Jan 13 21:23:30.468119 containerd[1877]: time="2025-01-13T21:23:30.467700846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frqp9,Uid:372e3431-bfdf-4857-b3f3-cf3297dddc2b,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:30.481759 containerd[1877]: time="2025-01-13T21:23:30.481459984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ms7f,Uid:1b1d2598-8e51-4ed1-8230-2c253bc7cbda,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:30.490700 containerd[1877]: time="2025-01-13T21:23:30.490254864Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 21:23:30.575969 containerd[1877]: time="2025-01-13T21:23:30.575922330Z" level=info msg="CreateContainer within sandbox \"6d1ea4c9655f9a5a7b5a512e33da2c4a19aaf29668bd35718504828cdcbdd025\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a5486e5f3f01d3f8b21dbda5f19bd8bea19fc34aa2c7d8ba4476f8f1526014fb\""
Jan 13 21:23:30.578141 containerd[1877]: time="2025-01-13T21:23:30.576765489Z" level=info msg="StartContainer for \"a5486e5f3f01d3f8b21dbda5f19bd8bea19fc34aa2c7d8ba4476f8f1526014fb\""
Jan 13 21:23:30.601729 containerd[1877]: time="2025-01-13T21:23:30.601335605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frqp9,Uid:372e3431-bfdf-4857-b3f3-cf3297dddc2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3edea5339d9d7977d2935375f65ae2fd2abe4f10f5ce5a33ee84ed1ba6a46039\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:23:30.602133 kubelet[3305]: E0113 21:23:30.602080 3305 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edea5339d9d7977d2935375f65ae2fd2abe4f10f5ce5a33ee84ed1ba6a46039\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:23:30.602319 kubelet[3305]: E0113 21:23:30.602290 3305 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edea5339d9d7977d2935375f65ae2fd2abe4f10f5ce5a33ee84ed1ba6a46039\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-frqp9"
Jan 13 21:23:30.602438 kubelet[3305]: E0113 21:23:30.602415 3305 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edea5339d9d7977d2935375f65ae2fd2abe4f10f5ce5a33ee84ed1ba6a46039\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-frqp9"
Jan 13 21:23:30.603101 kubelet[3305]: E0113 21:23:30.602896 3305 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frqp9_kube-system(372e3431-bfdf-4857-b3f3-cf3297dddc2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frqp9_kube-system(372e3431-bfdf-4857-b3f3-cf3297dddc2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3edea5339d9d7977d2935375f65ae2fd2abe4f10f5ce5a33ee84ed1ba6a46039\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-frqp9" podUID="372e3431-bfdf-4857-b3f3-cf3297dddc2b"
Jan 13 21:23:30.606368 containerd[1877]: time="2025-01-13T21:23:30.606151439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ms7f,Uid:1b1d2598-8e51-4ed1-8230-2c253bc7cbda,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"729484b61d369ca9da76b45f07d4328ab1bda022ebad3fd7c87ebfef86876458\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:23:30.606750 kubelet[3305]: E0113 21:23:30.606643 3305 remote_runtime.go:193]
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729484b61d369ca9da76b45f07d4328ab1bda022ebad3fd7c87ebfef86876458\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:23:30.606750 kubelet[3305]: E0113 21:23:30.606702 3305 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729484b61d369ca9da76b45f07d4328ab1bda022ebad3fd7c87ebfef86876458\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-8ms7f" Jan 13 21:23:30.606750 kubelet[3305]: E0113 21:23:30.606731 3305 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729484b61d369ca9da76b45f07d4328ab1bda022ebad3fd7c87ebfef86876458\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-8ms7f" Jan 13 21:23:30.606964 kubelet[3305]: E0113 21:23:30.606785 3305 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8ms7f_kube-system(1b1d2598-8e51-4ed1-8230-2c253bc7cbda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8ms7f_kube-system(1b1d2598-8e51-4ed1-8230-2c253bc7cbda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"729484b61d369ca9da76b45f07d4328ab1bda022ebad3fd7c87ebfef86876458\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-8ms7f" podUID="1b1d2598-8e51-4ed1-8230-2c253bc7cbda" Jan 13 21:23:30.636125 systemd[1]: Started 
cri-containerd-a5486e5f3f01d3f8b21dbda5f19bd8bea19fc34aa2c7d8ba4476f8f1526014fb.scope - libcontainer container a5486e5f3f01d3f8b21dbda5f19bd8bea19fc34aa2c7d8ba4476f8f1526014fb. Jan 13 21:23:30.680343 containerd[1877]: time="2025-01-13T21:23:30.680200966Z" level=info msg="StartContainer for \"a5486e5f3f01d3f8b21dbda5f19bd8bea19fc34aa2c7d8ba4476f8f1526014fb\" returns successfully" Jan 13 21:23:31.507225 kubelet[3305]: I0113 21:23:31.506146 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-p2cpq" podStartSLOduration=3.158506127 podStartE2EDuration="9.505955952s" podCreationTimestamp="2025-01-13 21:23:22 +0000 UTC" firstStartedPulling="2025-01-13 21:23:23.529765651 +0000 UTC m=+14.421164855" lastFinishedPulling="2025-01-13 21:23:29.87721547 +0000 UTC m=+20.768614680" observedRunningTime="2025-01-13 21:23:31.505442136 +0000 UTC m=+22.396841360" watchObservedRunningTime="2025-01-13 21:23:31.505955952 +0000 UTC m=+22.397355205" Jan 13 21:23:31.756472 (udev-worker)[3848]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:23:31.769533 systemd-networkd[1724]: flannel.1: Link UP Jan 13 21:23:31.769548 systemd-networkd[1724]: flannel.1: Gained carrier Jan 13 21:23:33.417180 systemd-networkd[1724]: flannel.1: Gained IPv6LL Jan 13 21:23:35.502675 ntpd[1859]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 13 21:23:35.503198 ntpd[1859]: 13 Jan 21:23:35 ntpd[1859]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 13 21:23:35.503198 ntpd[1859]: 13 Jan 21:23:35 ntpd[1859]: Listen normally on 8 flannel.1 [fe80::8c01:ceff:fe60:278b%4]:123 Jan 13 21:23:35.502789 ntpd[1859]: Listen normally on 8 flannel.1 [fe80::8c01:ceff:fe60:278b%4]:123 Jan 13 21:23:42.376407 containerd[1877]: time="2025-01-13T21:23:42.376364307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frqp9,Uid:372e3431-bfdf-4857-b3f3-cf3297dddc2b,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:42.469391 systemd-networkd[1724]: cni0: Link UP Jan 13 21:23:42.469405 systemd-networkd[1724]: cni0: Gained carrier Jan 13 21:23:42.474283 (udev-worker)[3986]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:23:42.475915 systemd-networkd[1724]: cni0: Lost carrier Jan 13 21:23:42.485381 systemd-networkd[1724]: vethcfa25da9: Link UP Jan 13 21:23:42.488408 kernel: cni0: port 1(vethcfa25da9) entered blocking state Jan 13 21:23:42.490125 kernel: cni0: port 1(vethcfa25da9) entered disabled state Jan 13 21:23:42.491916 kernel: vethcfa25da9: entered allmulticast mode Jan 13 21:23:42.491997 kernel: vethcfa25da9: entered promiscuous mode Jan 13 21:23:42.493701 kernel: cni0: port 1(vethcfa25da9) entered blocking state Jan 13 21:23:42.493755 kernel: cni0: port 1(vethcfa25da9) entered forwarding state Jan 13 21:23:42.493794 kernel: cni0: port 1(vethcfa25da9) entered disabled state Jan 13 21:23:42.494251 (udev-worker)[3989]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:23:42.502911 kernel: cni0: port 1(vethcfa25da9) entered blocking state Jan 13 21:23:42.503023 kernel: cni0: port 1(vethcfa25da9) entered forwarding state Jan 13 21:23:42.502952 systemd-networkd[1724]: vethcfa25da9: Gained carrier Jan 13 21:23:42.504055 systemd-networkd[1724]: cni0: Gained carrier Jan 13 21:23:42.514859 containerd[1877]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 13 21:23:42.514859 containerd[1877]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:23:42.558283 containerd[1877]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T21:23:42.558163293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:42.559997 containerd[1877]: time="2025-01-13T21:23:42.558997330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:42.559997 containerd[1877]: time="2025-01-13T21:23:42.559029148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:42.559997 containerd[1877]: time="2025-01-13T21:23:42.559131545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:42.590900 systemd[1]: run-containerd-runc-k8s.io-5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba-runc.HSk4tt.mount: Deactivated successfully. Jan 13 21:23:42.599806 systemd[1]: Started cri-containerd-5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba.scope - libcontainer container 5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba. Jan 13 21:23:42.651717 containerd[1877]: time="2025-01-13T21:23:42.651553815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frqp9,Uid:372e3431-bfdf-4857-b3f3-cf3297dddc2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba\"" Jan 13 21:23:42.657908 containerd[1877]: time="2025-01-13T21:23:42.657702811Z" level=info msg="CreateContainer within sandbox \"5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:42.680619 containerd[1877]: time="2025-01-13T21:23:42.680570829Z" level=info msg="CreateContainer within sandbox \"5b0fbcfb27d4b1a769477eedb1f902680e794e2bfdc10f68b8b4d983f797cfba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29ba55ad705a199354d0102931db5f0dbdcffbd273b3def994fcb80a7bd3bde9\"" Jan 13 21:23:42.682014 containerd[1877]: time="2025-01-13T21:23:42.681204727Z" level=info msg="StartContainer for \"29ba55ad705a199354d0102931db5f0dbdcffbd273b3def994fcb80a7bd3bde9\"" Jan 13 21:23:42.716720 systemd[1]: Started cri-containerd-29ba55ad705a199354d0102931db5f0dbdcffbd273b3def994fcb80a7bd3bde9.scope - libcontainer container 29ba55ad705a199354d0102931db5f0dbdcffbd273b3def994fcb80a7bd3bde9. 
Jan 13 21:23:42.752617 containerd[1877]: time="2025-01-13T21:23:42.752566353Z" level=info msg="StartContainer for \"29ba55ad705a199354d0102931db5f0dbdcffbd273b3def994fcb80a7bd3bde9\" returns successfully" Jan 13 21:23:43.383518 containerd[1877]: time="2025-01-13T21:23:43.382838870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ms7f,Uid:1b1d2598-8e51-4ed1-8230-2c253bc7cbda,Namespace:kube-system,Attempt:0,}" Jan 13 21:23:43.400407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74759732.mount: Deactivated successfully. Jan 13 21:23:43.459503 kernel: cni0: port 2(veth3838210e) entered blocking state Jan 13 21:23:43.459733 kernel: cni0: port 2(veth3838210e) entered disabled state Jan 13 21:23:43.462824 kernel: veth3838210e: entered allmulticast mode Jan 13 21:23:43.462914 kernel: veth3838210e: entered promiscuous mode Jan 13 21:23:43.464369 kernel: cni0: port 2(veth3838210e) entered blocking state Jan 13 21:23:43.464445 kernel: cni0: port 2(veth3838210e) entered forwarding state Jan 13 21:23:43.465311 systemd-networkd[1724]: veth3838210e: Link UP Jan 13 21:23:43.476585 kernel: cni0: port 2(veth3838210e) entered disabled state Jan 13 21:23:43.479968 kernel: cni0: port 2(veth3838210e) entered blocking state Jan 13 21:23:43.480053 kernel: cni0: port 2(veth3838210e) entered forwarding state Jan 13 21:23:43.480189 systemd-networkd[1724]: veth3838210e: Gained carrier Jan 13 21:23:43.532085 containerd[1877]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008a8d8), "name":"cbr0", "type":"bridge"} Jan 13 21:23:43.532085 
containerd[1877]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:23:43.567121 containerd[1877]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T21:23:43.566972180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:23:43.567121 containerd[1877]: time="2025-01-13T21:23:43.567040165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:23:43.567121 containerd[1877]: time="2025-01-13T21:23:43.567080074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:43.567462 containerd[1877]: time="2025-01-13T21:23:43.567211946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:23:43.603535 kubelet[3305]: I0113 21:23:43.599299 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-frqp9" podStartSLOduration=21.599273574 podStartE2EDuration="21.599273574s" podCreationTimestamp="2025-01-13 21:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:43.555607602 +0000 UTC m=+34.447006824" watchObservedRunningTime="2025-01-13 21:23:43.599273574 +0000 UTC m=+34.490672795" Jan 13 21:23:43.613223 systemd[1]: Started cri-containerd-6b2d6608957a3f1c00a36962a5a105e111e2b4b228a13e5515eabd65b3e518ca.scope - libcontainer container 6b2d6608957a3f1c00a36962a5a105e111e2b4b228a13e5515eabd65b3e518ca. 
Jan 13 21:23:43.700335 containerd[1877]: time="2025-01-13T21:23:43.700293477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ms7f,Uid:1b1d2598-8e51-4ed1-8230-2c253bc7cbda,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b2d6608957a3f1c00a36962a5a105e111e2b4b228a13e5515eabd65b3e518ca\"" Jan 13 21:23:43.751356 containerd[1877]: time="2025-01-13T21:23:43.750737384Z" level=info msg="CreateContainer within sandbox \"6b2d6608957a3f1c00a36962a5a105e111e2b4b228a13e5515eabd65b3e518ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:43.774488 containerd[1877]: time="2025-01-13T21:23:43.774423883Z" level=info msg="CreateContainer within sandbox \"6b2d6608957a3f1c00a36962a5a105e111e2b4b228a13e5515eabd65b3e518ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"023dd44f963089621a49f06e4af1bba632f9798a262b71d8dbf5b0e55e11109d\"" Jan 13 21:23:43.780910 containerd[1877]: time="2025-01-13T21:23:43.780864701Z" level=info msg="StartContainer for \"023dd44f963089621a49f06e4af1bba632f9798a262b71d8dbf5b0e55e11109d\"" Jan 13 21:23:43.815714 systemd[1]: Started cri-containerd-023dd44f963089621a49f06e4af1bba632f9798a262b71d8dbf5b0e55e11109d.scope - libcontainer container 023dd44f963089621a49f06e4af1bba632f9798a262b71d8dbf5b0e55e11109d. 
Jan 13 21:23:43.849182 systemd-networkd[1724]: cni0: Gained IPv6LL Jan 13 21:23:43.852053 containerd[1877]: time="2025-01-13T21:23:43.850662047Z" level=info msg="StartContainer for \"023dd44f963089621a49f06e4af1bba632f9798a262b71d8dbf5b0e55e11109d\" returns successfully" Jan 13 21:23:44.166643 systemd-networkd[1724]: vethcfa25da9: Gained IPv6LL Jan 13 21:23:44.600061 kubelet[3305]: I0113 21:23:44.599917 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8ms7f" podStartSLOduration=22.599896411 podStartE2EDuration="22.599896411s" podCreationTimestamp="2025-01-13 21:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:44.599334247 +0000 UTC m=+35.490733470" watchObservedRunningTime="2025-01-13 21:23:44.599896411 +0000 UTC m=+35.491295633" Jan 13 21:23:44.870731 systemd-networkd[1724]: veth3838210e: Gained IPv6LL Jan 13 21:23:47.502355 ntpd[1859]: Listen normally on 9 cni0 192.168.0.1:123 Jan 13 21:23:47.502451 ntpd[1859]: Listen normally on 10 cni0 [fe80::fc52:56ff:fe44:e6f0%5]:123 Jan 13 21:23:47.502897 ntpd[1859]: 13 Jan 21:23:47 ntpd[1859]: Listen normally on 9 cni0 192.168.0.1:123 Jan 13 21:23:47.502897 ntpd[1859]: 13 Jan 21:23:47 ntpd[1859]: Listen normally on 10 cni0 [fe80::fc52:56ff:fe44:e6f0%5]:123 Jan 13 21:23:47.502897 ntpd[1859]: 13 Jan 21:23:47 ntpd[1859]: Listen normally on 11 vethcfa25da9 [fe80::6022:26ff:fefe:be03%6]:123 Jan 13 21:23:47.502897 ntpd[1859]: 13 Jan 21:23:47 ntpd[1859]: Listen normally on 12 veth3838210e [fe80::78fe:e8ff:fe13:343%7]:123 Jan 13 21:23:47.502527 ntpd[1859]: Listen normally on 11 vethcfa25da9 [fe80::6022:26ff:fefe:be03%6]:123 Jan 13 21:23:47.502571 ntpd[1859]: Listen normally on 12 veth3838210e [fe80::78fe:e8ff:fe13:343%7]:123 Jan 13 21:23:52.245125 systemd[1]: Started sshd@7-172.31.23.220:22-147.75.109.163:53384.service - OpenSSH per-connection server daemon 
(147.75.109.163:53384). Jan 13 21:23:52.434051 sshd[4239]: Accepted publickey for core from 147.75.109.163 port 53384 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:23:52.436036 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:52.443027 systemd-logind[1866]: New session 8 of user core. Jan 13 21:23:52.450709 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:23:52.690275 sshd[4239]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:52.694908 systemd[1]: sshd@7-172.31.23.220:22-147.75.109.163:53384.service: Deactivated successfully. Jan 13 21:23:52.697981 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:23:52.699897 systemd-logind[1866]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:23:52.701369 systemd-logind[1866]: Removed session 8. Jan 13 21:23:57.725004 systemd[1]: Started sshd@8-172.31.23.220:22-147.75.109.163:45378.service - OpenSSH per-connection server daemon (147.75.109.163:45378). Jan 13 21:23:57.904105 sshd[4277]: Accepted publickey for core from 147.75.109.163 port 45378 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:23:57.905915 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:57.911997 systemd-logind[1866]: New session 9 of user core. Jan 13 21:23:57.919798 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:23:58.156113 sshd[4277]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:58.161588 systemd-logind[1866]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:23:58.162601 systemd[1]: sshd@8-172.31.23.220:22-147.75.109.163:45378.service: Deactivated successfully. Jan 13 21:23:58.165193 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:23:58.166270 systemd-logind[1866]: Removed session 9. 
Jan 13 21:24:03.202419 systemd[1]: Started sshd@9-172.31.23.220:22-147.75.109.163:45394.service - OpenSSH per-connection server daemon (147.75.109.163:45394). Jan 13 21:24:03.373430 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 45394 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:03.375864 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:03.393498 systemd-logind[1866]: New session 10 of user core. Jan 13 21:24:03.404761 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:24:03.635602 sshd[4315]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:03.642228 systemd[1]: sshd@9-172.31.23.220:22-147.75.109.163:45394.service: Deactivated successfully. Jan 13 21:24:03.645230 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:24:03.646417 systemd-logind[1866]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:24:03.647888 systemd-logind[1866]: Removed session 10. Jan 13 21:24:03.668603 systemd[1]: Started sshd@10-172.31.23.220:22-147.75.109.163:45408.service - OpenSSH per-connection server daemon (147.75.109.163:45408). Jan 13 21:24:03.853266 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 45408 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:03.856857 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:03.876584 systemd-logind[1866]: New session 11 of user core. Jan 13 21:24:03.884817 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:24:04.228288 sshd[4329]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:04.233789 systemd-logind[1866]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:24:04.235371 systemd[1]: sshd@10-172.31.23.220:22-147.75.109.163:45408.service: Deactivated successfully. 
Jan 13 21:24:04.240767 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:24:04.245606 systemd-logind[1866]: Removed session 11. Jan 13 21:24:04.262128 systemd[1]: Started sshd@11-172.31.23.220:22-147.75.109.163:45416.service - OpenSSH per-connection server daemon (147.75.109.163:45416). Jan 13 21:24:04.434424 sshd[4340]: Accepted publickey for core from 147.75.109.163 port 45416 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:04.436524 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:04.444855 systemd-logind[1866]: New session 12 of user core. Jan 13 21:24:04.452628 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:24:04.681844 sshd[4340]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:04.685282 systemd[1]: sshd@11-172.31.23.220:22-147.75.109.163:45416.service: Deactivated successfully. Jan 13 21:24:04.687713 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:24:04.689502 systemd-logind[1866]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:24:04.690905 systemd-logind[1866]: Removed session 12. Jan 13 21:24:09.723624 systemd[1]: Started sshd@12-172.31.23.220:22-147.75.109.163:47828.service - OpenSSH per-connection server daemon (147.75.109.163:47828). Jan 13 21:24:09.910415 sshd[4376]: Accepted publickey for core from 147.75.109.163 port 47828 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:09.912242 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:09.920905 systemd-logind[1866]: New session 13 of user core. Jan 13 21:24:09.925793 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:24:10.144866 sshd[4376]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:10.148827 systemd[1]: sshd@12-172.31.23.220:22-147.75.109.163:47828.service: Deactivated successfully. 
Jan 13 21:24:10.151995 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:24:10.153826 systemd-logind[1866]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:24:10.155821 systemd-logind[1866]: Removed session 13. Jan 13 21:24:10.181966 systemd[1]: Started sshd@13-172.31.23.220:22-147.75.109.163:47838.service - OpenSSH per-connection server daemon (147.75.109.163:47838). Jan 13 21:24:10.409795 sshd[4390]: Accepted publickey for core from 147.75.109.163 port 47838 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:10.411817 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:10.441997 systemd-logind[1866]: New session 14 of user core. Jan 13 21:24:10.450816 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:24:11.134049 sshd[4390]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:11.138989 systemd-logind[1866]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:24:11.141573 systemd[1]: sshd@13-172.31.23.220:22-147.75.109.163:47838.service: Deactivated successfully. Jan 13 21:24:11.144332 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:24:11.145472 systemd-logind[1866]: Removed session 14. Jan 13 21:24:11.175022 systemd[1]: Started sshd@14-172.31.23.220:22-147.75.109.163:47844.service - OpenSSH per-connection server daemon (147.75.109.163:47844). Jan 13 21:24:11.376461 sshd[4400]: Accepted publickey for core from 147.75.109.163 port 47844 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:11.378820 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:11.385367 systemd-logind[1866]: New session 15 of user core. Jan 13 21:24:11.390679 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 21:24:13.302268 sshd[4400]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:13.306146 systemd[1]: sshd@14-172.31.23.220:22-147.75.109.163:47844.service: Deactivated successfully. Jan 13 21:24:13.315522 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:24:13.318925 systemd-logind[1866]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:24:13.320379 systemd-logind[1866]: Removed session 15. Jan 13 21:24:13.339253 systemd[1]: Started sshd@15-172.31.23.220:22-147.75.109.163:47850.service - OpenSSH per-connection server daemon (147.75.109.163:47850). Jan 13 21:24:13.500426 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 47850 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:13.501199 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:13.511576 systemd-logind[1866]: New session 16 of user core. Jan 13 21:24:13.520099 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:24:13.949736 sshd[4439]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:13.955657 systemd-logind[1866]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:24:13.957119 systemd[1]: sshd@15-172.31.23.220:22-147.75.109.163:47850.service: Deactivated successfully. Jan 13 21:24:13.960236 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:24:13.962082 systemd-logind[1866]: Removed session 16. Jan 13 21:24:13.984309 systemd[1]: Started sshd@16-172.31.23.220:22-147.75.109.163:47860.service - OpenSSH per-connection server daemon (147.75.109.163:47860). Jan 13 21:24:14.154060 sshd[4450]: Accepted publickey for core from 147.75.109.163 port 47860 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:14.155693 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:14.161922 systemd-logind[1866]: New session 17 of user core. 
Jan 13 21:24:14.165699 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:24:14.407310 sshd[4450]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:14.412985 systemd-logind[1866]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:24:14.413738 systemd[1]: sshd@16-172.31.23.220:22-147.75.109.163:47860.service: Deactivated successfully. Jan 13 21:24:14.416172 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:24:14.418263 systemd-logind[1866]: Removed session 17. Jan 13 21:24:19.444016 systemd[1]: Started sshd@17-172.31.23.220:22-147.75.109.163:49840.service - OpenSSH per-connection server daemon (147.75.109.163:49840). Jan 13 21:24:19.611325 sshd[4483]: Accepted publickey for core from 147.75.109.163 port 49840 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:24:19.613141 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:19.618836 systemd-logind[1866]: New session 18 of user core. Jan 13 21:24:19.626815 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:24:19.827923 sshd[4483]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:19.833266 systemd[1]: sshd@17-172.31.23.220:22-147.75.109.163:49840.service: Deactivated successfully. Jan 13 21:24:19.836856 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:24:19.838117 systemd-logind[1866]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:24:19.839857 systemd-logind[1866]: Removed session 18. Jan 13 21:24:24.870029 systemd[1]: Started sshd@18-172.31.23.220:22-147.75.109.163:49852.service - OpenSSH per-connection server daemon (147.75.109.163:49852). 
Jan 13 21:24:25.048773 sshd[4521]: Accepted publickey for core from 147.75.109.163 port 49852 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:24:25.052239 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:25.066805 systemd-logind[1866]: New session 19 of user core.
Jan 13 21:24:25.075586 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:24:25.285785 sshd[4521]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:25.291275 systemd[1]: sshd@18-172.31.23.220:22-147.75.109.163:49852.service: Deactivated successfully.
Jan 13 21:24:25.293853 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:24:25.295226 systemd-logind[1866]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:24:25.296687 systemd-logind[1866]: Removed session 19.
Jan 13 21:24:30.325773 systemd[1]: Started sshd@19-172.31.23.220:22-147.75.109.163:38000.service - OpenSSH per-connection server daemon (147.75.109.163:38000).
Jan 13 21:24:30.485279 sshd[4557]: Accepted publickey for core from 147.75.109.163 port 38000 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:24:30.487846 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:30.502731 systemd-logind[1866]: New session 20 of user core.
Jan 13 21:24:30.518851 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:24:30.713011 sshd[4557]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:30.718060 systemd[1]: sshd@19-172.31.23.220:22-147.75.109.163:38000.service: Deactivated successfully.
Jan 13 21:24:30.720310 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:24:30.721545 systemd-logind[1866]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:24:30.722824 systemd-logind[1866]: Removed session 20.
Jan 13 21:24:35.757920 systemd[1]: Started sshd@20-172.31.23.220:22-147.75.109.163:38016.service - OpenSSH per-connection server daemon (147.75.109.163:38016).
Jan 13 21:24:35.926442 sshd[4591]: Accepted publickey for core from 147.75.109.163 port 38016 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:24:35.928300 sshd[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:35.934610 systemd-logind[1866]: New session 21 of user core.
Jan 13 21:24:35.937697 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:24:36.171581 sshd[4591]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:36.175902 systemd[1]: sshd@20-172.31.23.220:22-147.75.109.163:38016.service: Deactivated successfully.
Jan 13 21:24:36.180192 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:24:36.181101 systemd-logind[1866]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:24:36.183179 systemd-logind[1866]: Removed session 21.
Jan 13 21:24:50.276348 systemd[1]: cri-containerd-31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e.scope: Deactivated successfully.
Jan 13 21:24:50.277285 systemd[1]: cri-containerd-31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e.scope: Consumed 3.452s CPU time, 30.6M memory peak, 0B memory swap peak.
Jan 13 21:24:50.348128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e-rootfs.mount: Deactivated successfully.
Jan 13 21:24:50.361109 containerd[1877]: time="2025-01-13T21:24:50.360825965Z" level=info msg="shim disconnected" id=31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e namespace=k8s.io
Jan 13 21:24:50.361109 containerd[1877]: time="2025-01-13T21:24:50.360913345Z" level=warning msg="cleaning up after shim disconnected" id=31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e namespace=k8s.io
Jan 13 21:24:50.361109 containerd[1877]: time="2025-01-13T21:24:50.360923915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:50.729413 kubelet[3305]: I0113 21:24:50.729377 3305 scope.go:117] "RemoveContainer" containerID="31d934b8866c5367247397edd6439547e7219dfb355477b9614eed1fcd8d363e"
Jan 13 21:24:50.736316 containerd[1877]: time="2025-01-13T21:24:50.735529225Z" level=info msg="CreateContainer within sandbox \"cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:24:50.771379 containerd[1877]: time="2025-01-13T21:24:50.771284301Z" level=info msg="CreateContainer within sandbox \"cb88580c1fa6e2e54ad647b1a28775fdbdcb34a551dfa97814983e440679acf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dcca394b6ed16e5615bde073e4a90151266922bac9a73a147d51339487d59c8c\""
Jan 13 21:24:50.772176 containerd[1877]: time="2025-01-13T21:24:50.772053362Z" level=info msg="StartContainer for \"dcca394b6ed16e5615bde073e4a90151266922bac9a73a147d51339487d59c8c\""
Jan 13 21:24:50.827870 systemd[1]: Started cri-containerd-dcca394b6ed16e5615bde073e4a90151266922bac9a73a147d51339487d59c8c.scope - libcontainer container dcca394b6ed16e5615bde073e4a90151266922bac9a73a147d51339487d59c8c.
Jan 13 21:24:50.903183 containerd[1877]: time="2025-01-13T21:24:50.903002282Z" level=info msg="StartContainer for \"dcca394b6ed16e5615bde073e4a90151266922bac9a73a147d51339487d59c8c\" returns successfully"
Jan 13 21:24:51.020509 kubelet[3305]: E0113 21:24:51.019973 3305 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-220?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 21:24:55.196045 systemd[1]: cri-containerd-9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc.scope: Deactivated successfully.
Jan 13 21:24:55.197191 systemd[1]: cri-containerd-9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc.scope: Consumed 1.512s CPU time, 19.6M memory peak, 0B memory swap peak.
Jan 13 21:24:55.234957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc-rootfs.mount: Deactivated successfully.
Jan 13 21:24:55.248633 containerd[1877]: time="2025-01-13T21:24:55.248514820Z" level=info msg="shim disconnected" id=9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc namespace=k8s.io
Jan 13 21:24:55.248633 containerd[1877]: time="2025-01-13T21:24:55.248575192Z" level=warning msg="cleaning up after shim disconnected" id=9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc namespace=k8s.io
Jan 13 21:24:55.248633 containerd[1877]: time="2025-01-13T21:24:55.248588333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:55.754120 kubelet[3305]: I0113 21:24:55.754070 3305 scope.go:117] "RemoveContainer" containerID="9895b8fc9bdbda40c0e5b8535f3fa633eba05c5bfc8ddc09e6b3d035ef10b5dc"
Jan 13 21:24:55.760007 containerd[1877]: time="2025-01-13T21:24:55.759965680Z" level=info msg="CreateContainer within sandbox \"7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:24:55.797578 containerd[1877]: time="2025-01-13T21:24:55.797409918Z" level=info msg="CreateContainer within sandbox \"7a05d728fa28bd307b0c7737e1b49503f8ddcabe68c9f22c781b7277afb39177\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac\""
Jan 13 21:24:55.798122 containerd[1877]: time="2025-01-13T21:24:55.798088285Z" level=info msg="StartContainer for \"2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac\""
Jan 13 21:24:55.845781 systemd[1]: Started cri-containerd-2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac.scope - libcontainer container 2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac.
Jan 13 21:24:55.896654 containerd[1877]: time="2025-01-13T21:24:55.896592849Z" level=info msg="StartContainer for \"2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac\" returns successfully"
Jan 13 21:24:56.235613 systemd[1]: run-containerd-runc-k8s.io-2b58a2aa1192278b240dcebf5d73a5a898cf33c2d4c68e724307be7a387a41ac-runc.HCCXTt.mount: Deactivated successfully.