Feb 13 15:31:07.049192 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:31:07.049233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:31:07.049259 kernel: BIOS-provided physical RAM map:
Feb 13 15:31:07.053475 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:31:07.053510 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:31:07.053522 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:31:07.053545 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:31:07.053558 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:31:07.053570 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:31:07.053582 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:31:07.053594 kernel: NX (Execute Disable) protection: active
Feb 13 15:31:07.053607 kernel: APIC: Static calls initialized
Feb 13 15:31:07.053619 kernel: SMBIOS 2.7 present.
Feb 13 15:31:07.053631 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:31:07.053650 kernel: Hypervisor detected: KVM
Feb 13 15:31:07.053675 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:31:07.053688 kernel: kvm-clock: using sched offset of 9157388807 cycles
Feb 13 15:31:07.053704 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:31:07.053717 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 15:31:07.053731 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:31:07.053745 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:31:07.053762 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:31:07.053776 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:31:07.053790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:31:07.053803 kernel: Using GB pages for direct mapping
Feb 13 15:31:07.053817 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:31:07.053830 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:31:07.053844 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:31:07.053858 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:31:07.053872 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:31:07.053888 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:31:07.053902 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:31:07.053916 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:31:07.053930 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:31:07.053943 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:31:07.053956 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:31:07.053970 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:31:07.053983 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:31:07.053997 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:31:07.054014 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:31:07.054034 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:31:07.054048 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:31:07.054062 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:31:07.054077 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:31:07.054094 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:31:07.054108 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:31:07.054123 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:31:07.054137 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:31:07.054152 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:31:07.054166 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:31:07.054180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:31:07.054195 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:31:07.054209 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:31:07.054226 kernel: Zone ranges:
Feb 13 15:31:07.054240 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:31:07.054255 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:31:07.057402 kernel: Normal empty
Feb 13 15:31:07.057443 kernel: Movable zone start for each node
Feb 13 15:31:07.057459 kernel: Early memory node ranges
Feb 13 15:31:07.057475 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:31:07.057490 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:31:07.057506 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:31:07.057527 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:31:07.057542 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:31:07.057556 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:31:07.057570 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:31:07.057584 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:31:07.057599 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:31:07.057613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:31:07.057627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:31:07.057642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:31:07.057656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:31:07.057673 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:31:07.057687 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:31:07.057700 kernel: TSC deadline timer available
Feb 13 15:31:07.057714 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:31:07.057728 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:31:07.057742 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:31:07.057756 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:31:07.057770 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:31:07.057785 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:31:07.057802 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:31:07.057816 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:31:07.057830 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:31:07.057843 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:31:07.057857 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:31:07.057873 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:31:07.057888 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:31:07.057902 kernel: random: crng init done
Feb 13 15:31:07.057918 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:31:07.057932 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:31:07.057946 kernel: Fallback order for Node 0: 0
Feb 13 15:31:07.057960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:31:07.057974 kernel: Policy zone: DMA32
Feb 13 15:31:07.057987 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:31:07.058002 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 15:31:07.058016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:31:07.058033 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:31:07.058047 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:31:07.058060 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:31:07.058074 kernel: Dynamic Preempt: voluntary
Feb 13 15:31:07.058088 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:31:07.058103 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:31:07.058117 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:31:07.058131 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:31:07.058145 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:31:07.058158 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:31:07.058175 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:31:07.058189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:31:07.058202 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:31:07.058216 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:31:07.058230 kernel: Console: colour VGA+ 80x25
Feb 13 15:31:07.058243 kernel: printk: console [ttyS0] enabled
Feb 13 15:31:07.058257 kernel: ACPI: Core revision 20230628
Feb 13 15:31:07.058284 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:31:07.058298 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:31:07.058315 kernel: x2apic enabled
Feb 13 15:31:07.058330 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:31:07.058355 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:31:07.058372 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 15:31:07.058387 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:31:07.058402 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:31:07.058416 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:31:07.058430 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:31:07.058444 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:31:07.058459 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:31:07.058474 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:31:07.058488 kernel: RETBleed: Vulnerable
Feb 13 15:31:07.058506 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:31:07.058520 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:31:07.058535 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:31:07.058549 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:31:07.058564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:31:07.058578 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:31:07.058593 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:31:07.058610 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:31:07.058625 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:31:07.058639 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:31:07.058654 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:31:07.058668 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:31:07.058683 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:31:07.058697 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:31:07.058712 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:31:07.058726 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:31:07.058741 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:31:07.058758 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:31:07.058772 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:31:07.058786 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:31:07.058801 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:31:07.058815 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:31:07.058830 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:31:07.058844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:31:07.058858 kernel: landlock: Up and running.
Feb 13 15:31:07.058941 kernel: SELinux: Initializing.
Feb 13 15:31:07.058960 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:31:07.058975 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:31:07.058990 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:31:07.059009 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:31:07.059024 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:31:07.059039 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:31:07.059054 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:31:07.059069 kernel: signal: max sigframe size: 3632
Feb 13 15:31:07.059084 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:31:07.059099 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:31:07.059113 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:31:07.059128 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:31:07.059146 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:31:07.059161 kernel: .... node #0, CPUs: #1
Feb 13 15:31:07.059177 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:31:07.059192 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:31:07.059207 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:31:07.059222 kernel: smpboot: Max logical packages: 1
Feb 13 15:31:07.059236 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 15:31:07.059251 kernel: devtmpfs: initialized
Feb 13 15:31:07.063470 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:31:07.063511 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:31:07.063530 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:31:07.063546 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:31:07.063564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:31:07.063580 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:31:07.063597 kernel: audit: type=2000 audit(1739460665.916:1): state=initialized audit_enabled=0 res=1
Feb 13 15:31:07.063613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:31:07.063629 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:31:07.063652 kernel: cpuidle: using governor menu
Feb 13 15:31:07.063668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:31:07.063685 kernel: dca service started, version 1.12.1
Feb 13 15:31:07.063701 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:31:07.063718 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:31:07.063734 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:31:07.063750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:31:07.063767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:31:07.063784 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:31:07.063803 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:31:07.063820 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:31:07.063836 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:31:07.063852 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:31:07.063869 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:31:07.063894 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:31:07.063910 kernel: ACPI: Interpreter enabled
Feb 13 15:31:07.063926 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:31:07.063943 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:31:07.063962 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:31:07.063979 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:31:07.063996 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:31:07.064012 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:31:07.064234 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:31:07.064391 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:31:07.064522 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:31:07.064542 kernel: acpiphp: Slot [3] registered
Feb 13 15:31:07.064563 kernel: acpiphp: Slot [4] registered
Feb 13 15:31:07.064580 kernel: acpiphp: Slot [5] registered
Feb 13 15:31:07.064596 kernel: acpiphp: Slot [6] registered
Feb 13 15:31:07.064613 kernel: acpiphp: Slot [7] registered
Feb 13 15:31:07.064630 kernel: acpiphp: Slot [8] registered
Feb 13 15:31:07.064646 kernel: acpiphp: Slot [9] registered
Feb 13 15:31:07.064662 kernel: acpiphp: Slot [10] registered
Feb 13 15:31:07.064679 kernel: acpiphp: Slot [11] registered
Feb 13 15:31:07.064695 kernel: acpiphp: Slot [12] registered
Feb 13 15:31:07.064822 kernel: acpiphp: Slot [13] registered
Feb 13 15:31:07.064840 kernel: acpiphp: Slot [14] registered
Feb 13 15:31:07.064856 kernel: acpiphp: Slot [15] registered
Feb 13 15:31:07.064873 kernel: acpiphp: Slot [16] registered
Feb 13 15:31:07.064889 kernel: acpiphp: Slot [17] registered
Feb 13 15:31:07.064906 kernel: acpiphp: Slot [18] registered
Feb 13 15:31:07.064924 kernel: acpiphp: Slot [19] registered
Feb 13 15:31:07.064940 kernel: acpiphp: Slot [20] registered
Feb 13 15:31:07.064957 kernel: acpiphp: Slot [21] registered
Feb 13 15:31:07.064977 kernel: acpiphp: Slot [22] registered
Feb 13 15:31:07.064993 kernel: acpiphp: Slot [23] registered
Feb 13 15:31:07.065009 kernel: acpiphp: Slot [24] registered
Feb 13 15:31:07.065025 kernel: acpiphp: Slot [25] registered
Feb 13 15:31:07.065042 kernel: acpiphp: Slot [26] registered
Feb 13 15:31:07.065116 kernel: acpiphp: Slot [27] registered
Feb 13 15:31:07.065135 kernel: acpiphp: Slot [28] registered
Feb 13 15:31:07.065152 kernel: acpiphp: Slot [29] registered
Feb 13 15:31:07.065168 kernel: acpiphp: Slot [30] registered
Feb 13 15:31:07.065185 kernel: acpiphp: Slot [31] registered
Feb 13 15:31:07.065204 kernel: PCI host bridge to bus 0000:00
Feb 13 15:31:07.067425 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:31:07.067567 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:31:07.067690 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:31:07.067808 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:31:07.067934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:31:07.068088 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:31:07.068239 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:31:07.068482 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:31:07.068629 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:31:07.068763 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:31:07.068897 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:31:07.069030 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:31:07.069162 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:31:07.070349 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:31:07.070499 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:31:07.070634 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:31:07.070771 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:31:07.070975 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:31:07.071107 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:31:07.071236 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:31:07.071409 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:31:07.071540 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:31:07.071674 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:31:07.071799 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:31:07.071819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:31:07.071833 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:31:07.071854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:31:07.071871 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:31:07.071896 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:31:07.071910 kernel: iommu: Default domain type: Translated
Feb 13 15:31:07.071924 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:31:07.071937 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:31:07.071953 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:31:07.071969 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:31:07.071985 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:31:07.072138 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:31:07.074754 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:31:07.075123 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:31:07.075150 kernel: vgaarb: loaded
Feb 13 15:31:07.075166 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:31:07.075180 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:31:07.075194 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:31:07.075208 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:31:07.075229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:31:07.075244 kernel: pnp: PnP ACPI init
Feb 13 15:31:07.075258 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:31:07.075286 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:31:07.076377 kernel: NET: Registered PF_INET protocol family
Feb 13 15:31:07.076688 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:31:07.076712 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:31:07.076728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:31:07.076780 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:31:07.076797 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:31:07.076812 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:31:07.076825 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:31:07.076870 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:31:07.076883 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:31:07.077179 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:31:07.079458 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:31:07.079598 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:31:07.079715 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:31:07.079856 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:31:07.079883 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:31:07.079898 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:31:07.079913 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:31:07.079927 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 15:31:07.079941 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:31:07.079956 kernel: Initialise system trusted keyrings
Feb 13 15:31:07.079973 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:31:07.079987 kernel: Key type asymmetric registered
Feb 13 15:31:07.080000 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:31:07.080014 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:31:07.080028 kernel: io scheduler mq-deadline registered
Feb 13 15:31:07.080042 kernel: io scheduler kyber registered
Feb 13 15:31:07.080056 kernel: io scheduler bfq registered
Feb 13 15:31:07.080070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:31:07.080084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:31:07.080101 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:31:07.080115 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:31:07.080129 kernel: i8042: Warning: Keylock active
Feb 13 15:31:07.080143 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:31:07.080157 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:31:07.080317 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:31:07.080442 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:31:07.080561 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:31:06 UTC (1739460666)
Feb 13 15:31:07.080682 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:31:07.080703 kernel: intel_pstate: CPU model not supported
Feb 13 15:31:07.080717 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:31:07.080731 kernel: Segment Routing with IPv6
Feb 13 15:31:07.080745 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:31:07.080759 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:31:07.080774 kernel: Key type dns_resolver registered
Feb 13 15:31:07.080787 kernel: IPI shorthand broadcast: enabled
Feb 13 15:31:07.080800 kernel: sched_clock: Marking stable (685020201, 252810972)->(1060049629, -122218456)
Feb 13 15:31:07.080813 kernel: registered taskstats version 1
Feb 13 15:31:07.080831 kernel: Loading compiled-in X.509 certificates
Feb 13 15:31:07.080845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:31:07.080858 kernel: Key type .fscrypt registered
Feb 13 15:31:07.080872 kernel: Key type fscrypt-provisioning registered
Feb 13 15:31:07.080886 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:31:07.080900 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:31:07.080913 kernel: ima: No architecture policies found
Feb 13 15:31:07.080927 kernel: clk: Disabling unused clocks
Feb 13 15:31:07.080944 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:31:07.080958 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:31:07.080972 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:31:07.080986 kernel: Run /init as init process
Feb 13 15:31:07.081000 kernel: with arguments:
Feb 13 15:31:07.081013 kernel: /init
Feb 13 15:31:07.081026 kernel: with environment:
Feb 13 15:31:07.081040 kernel: HOME=/
Feb 13 15:31:07.081099 kernel: TERM=linux
Feb 13 15:31:07.081114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:31:07.081138 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:31:07.081168 systemd[1]: Detected virtualization amazon.
Feb 13 15:31:07.081186 systemd[1]: Detected architecture x86-64.
Feb 13 15:31:07.081200 systemd[1]: Running in initrd.
Feb 13 15:31:07.081217 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:31:07.081231 systemd[1]: Hostname set to <localhost>.
Feb 13 15:31:07.081247 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:31:07.081261 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:31:07.082358 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:31:07.082380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:31:07.082509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:31:07.082527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:31:07.082549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:31:07.082566 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:31:07.082707 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:31:07.082724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:31:07.082741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:31:07.082756 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:31:07.082771 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:31:07.082791 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:31:07.082806 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:31:07.082821 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:31:07.082836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:31:07.082851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:31:07.082866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:31:07.082881 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:31:07.082896 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:31:07.082911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:31:07.082929 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:31:07.082944 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:31:07.082959 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:31:07.082974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:31:07.082989 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:31:07.083004 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:31:07.083019 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:31:07.083037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:31:07.083052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:07.083068 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:31:07.083083 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:31:07.083103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:31:07.083121 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:31:07.083140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:31:07.083192 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:31:07.083231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:31:07.083246 kernel: Bridge firewalling registered
Feb 13 15:31:07.083262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:31:07.084369 systemd-journald[179]: Journal started
Feb 13 15:31:07.084410 systemd-journald[179]: Runtime Journal (/run/log/journal/ec23dcddef181c8041e0d62a50b38b91) is 4.8M, max 38.6M, 33.7M free.
Feb 13 15:31:07.023124 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:31:07.193853 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:31:07.076210 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:31:07.195865 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:31:07.197988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:07.206653 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:07.211502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:31:07.215494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:31:07.220584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:31:07.251816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:31:07.264092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:31:07.267758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:31:07.269941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:07.277489 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:31:07.293806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:31:07.321010 dracut-cmdline[213]: dracut-dracut-053
Feb 13 15:31:07.327746 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:31:07.409369 systemd-resolved[216]: Positive Trust Anchors:
Feb 13 15:31:07.411015 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:31:07.413577 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:31:07.428459 systemd-resolved[216]: Defaulting to hostname 'linux'.
Feb 13 15:31:07.431551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:31:07.433586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:31:07.511462 kernel: SCSI subsystem initialized
Feb 13 15:31:07.523302 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:31:07.536369 kernel: iscsi: registered transport (tcp)
Feb 13 15:31:07.569584 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:31:07.569668 kernel: QLogic iSCSI HBA Driver
Feb 13 15:31:07.655747 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:31:07.668602 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:31:07.698579 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:31:07.698663 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:31:07.698685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:31:07.746308 kernel: raid6: avx512x4 gen() 15629 MB/s
Feb 13 15:31:07.768327 kernel: raid6: avx512x2 gen() 16170 MB/s
Feb 13 15:31:07.786310 kernel: raid6: avx512x1 gen() 4735 MB/s
Feb 13 15:31:07.803304 kernel: raid6: avx2x4 gen() 15175 MB/s
Feb 13 15:31:07.820322 kernel: raid6: avx2x2 gen() 14947 MB/s
Feb 13 15:31:07.838423 kernel: raid6: avx2x1 gen() 12446 MB/s
Feb 13 15:31:07.838501 kernel: raid6: using algorithm avx512x2 gen() 16170 MB/s
Feb 13 15:31:07.856470 kernel: raid6: .... xor() 16959 MB/s, rmw enabled
Feb 13 15:31:07.856554 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:31:07.881308 kernel: xor: automatically using best checksumming function avx
Feb 13 15:31:08.157315 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:31:08.168417 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:31:08.174498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:31:08.192557 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Feb 13 15:31:08.198414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:31:08.212545 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:31:08.231623 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Feb 13 15:31:08.265126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:31:08.269452 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:31:08.407039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:31:08.419616 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:31:08.465959 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:31:08.473198 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:31:08.475986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:31:08.478598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:31:08.494577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:31:08.537421 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:31:08.539525 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:31:08.581510 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:31:08.581741 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:31:08.581919 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:31:08.581940 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:31:08.582044 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:99:c1:2d:5d:77
Feb 13 15:31:08.541190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:31:08.587520 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:31:08.612490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:31:08.612672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:08.618054 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:08.622120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:31:08.622668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:08.627971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:08.639102 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:31:08.639421 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:31:08.641375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:08.654359 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:31:08.663443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:31:08.663506 kernel: GPT:9289727 != 16777215
Feb 13 15:31:08.663532 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:31:08.664576 kernel: GPT:9289727 != 16777215
Feb 13 15:31:08.664618 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:31:08.665596 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:31:08.789110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:31:08.794886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:08.806308 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Feb 13 15:31:08.805267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:08.860353 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Feb 13 15:31:08.861099 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:08.879938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:31:08.933187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:31:08.941483 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:31:08.941696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:31:08.957852 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:31:08.971462 disk-uuid[632]: Primary Header is updated.
Feb 13 15:31:08.971462 disk-uuid[632]: Secondary Entries is updated.
Feb 13 15:31:08.971462 disk-uuid[632]: Secondary Header is updated.
Feb 13 15:31:08.978301 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:31:09.992291 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:31:09.992673 disk-uuid[633]: The operation has completed successfully.
Feb 13 15:31:10.196555 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:31:10.196681 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:31:10.240506 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:31:10.249839 sh[893]: Success
Feb 13 15:31:10.283305 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:31:10.459259 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:31:10.474403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:31:10.480301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:31:10.521623 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:31:10.522028 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:31:10.522064 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:31:10.522923 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:31:10.523643 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:31:10.680304 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:31:10.684331 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:31:10.686936 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:31:10.694628 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:31:10.700483 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:31:10.721843 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:31:10.722147 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:31:10.722177 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:31:10.726328 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:31:10.760432 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:31:10.760770 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:31:10.771542 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:31:10.780745 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:31:10.883428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:31:10.896077 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:31:10.941806 systemd-networkd[1085]: lo: Link UP
Feb 13 15:31:10.941819 systemd-networkd[1085]: lo: Gained carrier
Feb 13 15:31:10.946164 systemd-networkd[1085]: Enumeration completed
Feb 13 15:31:10.946384 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:31:10.952779 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:31:10.952784 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:31:10.953079 systemd[1]: Reached target network.target - Network.
Feb 13 15:31:10.965534 systemd-networkd[1085]: eth0: Link UP
Feb 13 15:31:10.965544 systemd-networkd[1085]: eth0: Gained carrier
Feb 13 15:31:10.965561 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:31:10.990451 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.29.23/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:31:11.105705 ignition[1000]: Ignition 2.20.0
Feb 13 15:31:11.105718 ignition[1000]: Stage: fetch-offline
Feb 13 15:31:11.105958 ignition[1000]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:31:11.105971 ignition[1000]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:31:11.108350 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:31:11.106433 ignition[1000]: Ignition finished successfully
Feb 13 15:31:11.119719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:31:11.153920 ignition[1094]: Ignition 2.20.0
Feb 13 15:31:11.153936 ignition[1094]: Stage: fetch
Feb 13 15:31:11.154527 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:31:11.154542 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:31:11.154662 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:31:11.199668 ignition[1094]: PUT result: OK
Feb 13 15:31:11.214766 ignition[1094]: parsed url from cmdline: ""
Feb 13 15:31:11.214779 ignition[1094]: no config URL provided
Feb 13 15:31:11.214789 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:31:11.214804 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:31:11.214829 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:31:11.220611 ignition[1094]: PUT result: OK
Feb 13 15:31:11.227145 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:31:11.233611 ignition[1094]: GET result: OK
Feb 13 15:31:11.233759 ignition[1094]: parsing config with SHA512: aaaf73820541b995e3aa226c3d0a1959e827bf0000ad1b3e6704bb4d22de295ebef85b037e6feb3f5bffe27b64c93a971b19b7f86e485d7b720a1d27ddbf6088
Feb 13 15:31:11.248167 unknown[1094]: fetched base config from "system"
Feb 13 15:31:11.249608 unknown[1094]: fetched base config from "system"
Feb 13 15:31:11.249770 unknown[1094]: fetched user config from "aws"
Feb 13 15:31:11.251023 ignition[1094]: fetch: fetch complete
Feb 13 15:31:11.251030 ignition[1094]: fetch: fetch passed
Feb 13 15:31:11.251101 ignition[1094]: Ignition finished successfully
Feb 13 15:31:11.254921 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:31:11.260530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:31:11.291362 ignition[1101]: Ignition 2.20.0
Feb 13 15:31:11.291376 ignition[1101]: Stage: kargs
Feb 13 15:31:11.291792 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:31:11.291805 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:31:11.292180 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:31:11.295708 ignition[1101]: PUT result: OK
Feb 13 15:31:11.304044 ignition[1101]: kargs: kargs passed
Feb 13 15:31:11.304133 ignition[1101]: Ignition finished successfully
Feb 13 15:31:11.306547 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:31:11.313781 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:31:11.335602 ignition[1107]: Ignition 2.20.0
Feb 13 15:31:11.335616 ignition[1107]: Stage: disks
Feb 13 15:31:11.336016 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:31:11.336025 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:31:11.336562 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:31:11.337944 ignition[1107]: PUT result: OK
Feb 13 15:31:11.352067 ignition[1107]: disks: disks passed
Feb 13 15:31:11.352146 ignition[1107]: Ignition finished successfully
Feb 13 15:31:11.356416 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:31:11.356977 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:31:11.363199 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:31:11.363332 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:31:11.368147 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:31:11.369415 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:31:11.378778 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:31:11.430544 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:31:11.435159 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:31:11.449512 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:31:11.593301 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:31:11.594389 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:31:11.596697 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:31:11.617445 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:31:11.621413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:31:11.624464 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:31:11.624537 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:31:11.624571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:31:11.639301 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1135)
Feb 13 15:31:11.639800 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:31:11.644258 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:31:11.644305 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:31:11.644325 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:31:11.647547 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:31:11.653311 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:31:11.656782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:31:11.958862 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:31:11.992556 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:31:12.004205 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:31:12.028295 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:31:12.328520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:31:12.337408 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:31:12.340582 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:31:12.382302 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:31:12.382454 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:31:12.409774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:31:12.440515 ignition[1248]: INFO : Ignition 2.20.0
Feb 13 15:31:12.440515 ignition[1248]: INFO : Stage: mount
Feb 13 15:31:12.443548 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:31:12.443548 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:31:12.443548 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:31:12.443548 ignition[1248]: INFO : PUT result: OK
Feb 13 15:31:12.450558 ignition[1248]: INFO : mount: mount passed
Feb 13 15:31:12.450558 ignition[1248]: INFO : Ignition finished successfully
Feb 13 15:31:12.456119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:31:12.473586 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:31:12.600566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:31:12.649302 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1259)
Feb 13 15:31:12.651818 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:31:12.651925 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:31:12.651947 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:31:12.659993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:31:12.660767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:31:12.711664 ignition[1275]: INFO : Ignition 2.20.0 Feb 13 15:31:12.711664 ignition[1275]: INFO : Stage: files Feb 13 15:31:12.711664 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:12.711664 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:31:12.711664 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:31:12.719634 ignition[1275]: INFO : PUT result: OK Feb 13 15:31:12.723552 ignition[1275]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:31:12.726036 ignition[1275]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:31:12.726036 ignition[1275]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:31:12.745738 ignition[1275]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:31:12.747966 ignition[1275]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:31:12.749887 unknown[1275]: wrote ssh authorized keys file for user: core Feb 13 15:31:12.752315 ignition[1275]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:31:12.754217 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:31:12.754217 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:31:12.882427 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:31:12.944404 systemd-networkd[1085]: eth0: Gained IPv6LL Feb 13 15:31:13.022581 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:31:13.024949 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:31:13.024949 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:31:13.493248 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:31:13.650714 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:31:14.106753 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:31:14.540360 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:31:14.540360 ignition[1275]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:31:14.550574 ignition[1275]: INFO : files: files passed Feb 13 15:31:14.550574 ignition[1275]: INFO : Ignition finished successfully Feb 13 15:31:14.552218 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:31:14.571744 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:31:14.587509 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:31:14.590237 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:31:14.590434 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
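Each "GET ...: attempt #1" entry above is Ignition downloading a file payload (the helm tarball, the cilium CLI, the kubernetes sysext image) with per-request attempt accounting; here every fetch succeeded on the first try. A sketch of that attempt-numbered retry pattern, assuming simple exponential backoff since Ignition's real backoff policy is not visible in this log:

    import time
    import urllib.request

    def fetch(url, attempts=5, base_delay=1.0):
        # Retry a download, numbering attempts the way the log lines do.
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))  # assumed backoff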
Feb 13 15:31:14.613756 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:14.613756 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:14.620684 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:14.638460 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:31:14.638923 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:31:14.661545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:31:14.785483 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:31:14.785616 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:31:14.791266 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:31:14.800408 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:31:14.809705 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:31:14.834190 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:31:14.855798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:31:14.863540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:31:14.888547 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:31:14.888780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:31:14.893141 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:31:14.897197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:31:14.899242 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:31:14.903967 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:31:14.908297 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:31:14.912122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:31:14.918836 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:31:14.924241 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:31:14.928443 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:31:14.929918 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:31:14.934363 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:31:14.942037 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:31:14.948247 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:31:14.953536 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:31:14.955800 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:31:14.962155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:31:14.963820 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:31:14.969946 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 15:31:14.971231 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:31:14.973176 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:31:14.973418 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:31:14.977380 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:31:14.977513 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:31:14.981336 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:31:14.981524 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:31:14.997597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:31:15.017193 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:31:15.029594 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:31:15.029864 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:31:15.033358 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:31:15.033528 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:31:15.066531 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:31:15.066928 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:31:15.091698 ignition[1328]: INFO : Ignition 2.20.0 Feb 13 15:31:15.093841 ignition[1328]: INFO : Stage: umount Feb 13 15:31:15.093841 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:15.093841 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:31:15.093841 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:31:15.103243 ignition[1328]: INFO : PUT result: OK Feb 13 15:31:15.107857 ignition[1328]: INFO : umount: umount passed Feb 13 15:31:15.107857 ignition[1328]: INFO : Ignition finished successfully Feb 13 15:31:15.111954 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:31:15.113564 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:31:15.118017 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:31:15.121799 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:31:15.122519 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:31:15.125739 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:31:15.125816 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:31:15.128621 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:31:15.128679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:31:15.130545 systemd[1]: Stopped target network.target - Network. Feb 13 15:31:15.133168 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:31:15.133242 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:31:15.135906 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:31:15.139672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:31:15.143828 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:31:15.148709 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 15:31:15.150902 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:31:15.154434 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:31:15.154489 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:31:15.157366 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:31:15.157411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:31:15.159791 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:31:15.159857 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:31:15.161964 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:31:15.162016 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:31:15.165119 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:31:15.167022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:31:15.178635 systemd-networkd[1085]: eth0: DHCPv6 lease lost Feb 13 15:31:15.179214 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:31:15.179337 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:31:15.182236 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:31:15.182430 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:31:15.188331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:31:15.188392 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:31:15.199782 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:31:15.201956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:31:15.202024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:31:15.204311 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:31:15.204362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:31:15.208537 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:31:15.208597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:31:15.210341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:31:15.210395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:31:15.212148 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:31:15.255305 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:31:15.258240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:31:15.265780 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:31:15.265884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:31:15.276369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:31:15.278008 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:31:15.282439 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:31:15.283789 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:31:15.286138 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:31:15.286214 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:31:15.289406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:31:15.289488 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:31:15.294053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:31:15.296371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:31:15.296438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:31:15.299234 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:31:15.300595 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:31:15.305709 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:31:15.305792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:31:15.313119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:31:15.313250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:15.317574 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:31:15.317685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:31:15.322361 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:31:15.322461 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:31:15.325008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:31:15.325557 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:31:15.337514 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:31:15.337693 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:31:15.337990 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:31:15.359644 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:31:15.397197 systemd[1]: Switching root. Feb 13 15:31:15.439360 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Feb 13 15:31:15.439456 systemd-journald[179]: Journal stopped Feb 13 15:31:17.851970 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:31:17.852062 kernel: SELinux: policy capability open_perms=1 Feb 13 15:31:17.852099 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:31:17.852118 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:31:17.852136 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:31:17.852158 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:31:17.852177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:31:17.852195 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:31:17.852214 kernel: audit: type=1403 audit(1739460676.259:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:31:17.852241 systemd[1]: Successfully loaded SELinux policy in 45.939ms. Feb 13 15:31:17.855335 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.590ms. 
Feb 13 15:31:17.855515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:31:17.855543 systemd[1]: Detected virtualization amazon. Feb 13 15:31:17.855565 systemd[1]: Detected architecture x86-64. Feb 13 15:31:17.855585 systemd[1]: Detected first boot. Feb 13 15:31:17.855605 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:31:17.855626 zram_generator::config[1372]: No configuration found. Feb 13 15:31:17.855649 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:31:17.855670 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:31:17.855694 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:31:17.855718 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:31:17.855739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:31:17.855760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:31:17.855781 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:31:17.855801 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:31:17.855822 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:31:17.855843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:31:17.855875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:31:17.855896 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:31:17.855917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:31:17.855938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:31:17.855959 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:31:17.855982 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:31:17.856009 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:31:17.856031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:31:17.856052 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:31:17.856075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:31:17.856099 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:31:17.856122 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:31:17.856144 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:31:17.856173 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:31:17.856196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:31:17.856219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:31:17.856242 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:31:17.856283 systemd[1]: Reached target swap.target - Swaps. 
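"Initializing machine ID from VM UUID" above means systemd derived the machine ID from the hypervisor-supplied DMI product UUID on this first boot rather than generating a random one; the resulting 32-hex-digit ID appears a few lines below as the journal directory name. A rough sketch of the idea, not systemd's actual implementation, which lives in machine-id-setup:

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        # A DMI UUID such as "ec23dcdd-...-..." becomes a machine-id-shaped
        # string of 32 lowercase hex characters with the dashes removed.
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()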
Feb 13 15:31:17.856302 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:31:17.856320 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:31:17.856338 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:31:17.856356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:31:17.856374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:31:17.856393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:31:17.856411 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:31:17.856434 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:31:17.856457 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:31:17.856546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:17.856569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:31:17.856589 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:31:17.856608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:31:17.856628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:31:17.856647 systemd[1]: Reached target machines.target - Containers. Feb 13 15:31:17.856665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:31:17.856781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:31:17.856806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:31:17.856826 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:31:17.856845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:31:17.856865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:31:17.856884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:31:17.856903 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:31:17.856921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:31:17.856939 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:31:17.856961 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:31:17.856980 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:31:17.856998 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:31:17.857017 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:31:17.857036 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:31:17.857147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:31:17.857170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:31:17.857190 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 15:31:17.857210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:31:17.857232 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:31:17.857251 systemd[1]: Stopped verity-setup.service. Feb 13 15:31:17.862334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:17.862475 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:31:17.862515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:31:17.862551 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:31:17.862585 kernel: loop: module loaded Feb 13 15:31:17.862821 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:31:17.862852 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:31:17.862879 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:31:17.862904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:31:17.862929 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:31:17.862954 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:31:17.862986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:31:17.863011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:31:17.863035 kernel: fuse: init (API version 7.39) Feb 13 15:31:17.863060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:31:17.863084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:31:17.863110 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:31:17.863135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:31:17.863172 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:31:17.863194 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:31:17.863217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:31:17.863239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:31:17.869368 systemd-journald[1454]: Collecting audit messages is disabled. Feb 13 15:31:17.869466 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:31:17.869492 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:31:17.869515 systemd-journald[1454]: Journal started Feb 13 15:31:17.869556 systemd-journald[1454]: Runtime Journal (/run/log/journal/ec23dcddef181c8041e0d62a50b38b91) is 4.8M, max 38.6M, 33.7M free. Feb 13 15:31:17.281784 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:31:17.302155 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:31:17.302665 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:31:17.911550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:31:17.920799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:31:17.950477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
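The journald line above reports the runtime journal at 4.8M with a 38.6M cap and 33.7M free; note that current use plus free space matches the cap. journald's documented default caps a journal at 10% of its backing filesystem, which would back-compute to a /run tmpfs of roughly 386M here. That filesystem size is an inference from the default, not something this log states:

    runtime_max = 38.6           # MB, from "max 38.6M"
    default_cap_fraction = 0.10  # journald default: 10% of the filesystem
    print(4.8 + 33.7)                          # 38.5, use + free ~= the cap
    print(runtime_max / default_cap_fraction)  # ~386 MB implied size of /run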
Feb 13 15:31:17.956684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:31:17.967390 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:31:17.967848 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:31:17.969670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:31:17.978969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:31:18.042717 kernel: ACPI: bus type drm_connector registered Feb 13 15:31:18.042073 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:31:18.043949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:31:18.054045 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:31:18.077652 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:31:18.077998 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:31:18.083992 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:31:18.101555 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:31:18.110622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:31:18.112254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:31:18.118399 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:31:18.123597 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:31:18.125648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:31:18.127694 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Feb 13 15:31:18.127718 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Feb 13 15:31:18.136513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:31:18.144499 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:31:18.148407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:31:18.152305 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:31:18.156821 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:31:18.178819 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:31:18.206780 systemd-journald[1454]: Time spent on flushing to /var/log/journal/ec23dcddef181c8041e0d62a50b38b91 is 105.200ms for 968 entries. Feb 13 15:31:18.206780 systemd-journald[1454]: System Journal (/var/log/journal/ec23dcddef181c8041e0d62a50b38b91) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:31:18.348067 systemd-journald[1454]: Received client request to flush runtime journal. Feb 13 15:31:18.348130 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 15:31:18.348156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:31:18.223480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Feb 13 15:31:18.228476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:31:18.252037 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:31:18.272459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:31:18.288161 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:31:18.352699 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:31:18.375495 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:31:18.382870 kernel: loop1: detected capacity change from 0 to 62848 Feb 13 15:31:18.382578 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:31:18.402530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:31:18.440399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:31:18.441529 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:31:18.455260 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Feb 13 15:31:18.455695 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Feb 13 15:31:18.480425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:31:18.499304 kernel: loop2: detected capacity change from 0 to 140992 Feb 13 15:31:18.561951 kernel: loop3: detected capacity change from 0 to 211296 Feb 13 15:31:18.663215 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:31:18.722515 kernel: loop5: detected capacity change from 0 to 62848 Feb 13 15:31:18.771342 kernel: loop6: detected capacity change from 0 to 140992 Feb 13 15:31:18.835366 kernel: loop7: detected capacity change from 0 to 211296 Feb 13 15:31:18.851946 (sd-merge)[1526]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:31:18.854786 (sd-merge)[1526]: Merged extensions into '/usr'. Feb 13 15:31:18.871133 systemd[1]: Reloading requested from client PID 1504 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:31:18.871706 systemd[1]: Reloading... Feb 13 15:31:19.025388 zram_generator::config[1551]: No configuration found. Feb 13 15:31:19.435066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:19.528300 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:31:19.615765 systemd[1]: Reloading finished in 743 ms. Feb 13 15:31:19.645526 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:31:19.647613 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:31:19.666835 systemd[1]: Starting ensure-sysext.service... Feb 13 15:31:19.686795 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:31:19.732093 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:31:19.732621 systemd[1]: Reloading requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)... 
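The "(sd-merge)" entries above show systemd-sysext overlaying the extension images staged earlier by Ignition (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr, followed by a daemon reload so units from the merged tree become visible. A small sketch that merely lists candidate extension images, assuming the standard sysext search directories and doing no merging itself:

    import os

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        # Collect *.raw images and plain directory trees, the two forms
        # systemd-sysext accepts.
        found = []
        for d in SYSEXT_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                path = os.path.join(d, name)
                if name.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    print(list_extension_images())  # e.g. ['/etc/extensions/kubernetes.raw']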
Feb 13 15:31:19.732634 systemd[1]: Reloading... Feb 13 15:31:19.733054 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:31:19.737185 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:31:19.737896 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Feb 13 15:31:19.738001 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Feb 13 15:31:19.754553 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:31:19.754571 systemd-tmpfiles[1602]: Skipping /boot Feb 13 15:31:19.804210 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:31:19.804228 systemd-tmpfiles[1602]: Skipping /boot Feb 13 15:31:19.976325 zram_generator::config[1638]: No configuration found. Feb 13 15:31:20.144068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:20.240724 systemd[1]: Reloading finished in 507 ms. Feb 13 15:31:20.259645 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:31:20.266866 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:31:20.275653 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:31:20.291525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:31:20.296914 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:31:20.309879 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:31:20.319196 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:31:20.326608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:31:20.340040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.340850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:31:20.350777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:31:20.365335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:31:20.387254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:31:20.389859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:31:20.390258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.396617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:31:20.399745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:31:20.436833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:31:20.442234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:31:20.442507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 15:31:20.448615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.449186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:31:20.463087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:31:20.467645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:31:20.470181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:31:20.470426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.489784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.490671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:31:20.503673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:31:20.505337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:31:20.505771 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:31:20.508672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:20.514860 systemd[1]: Finished ensure-sysext.service. Feb 13 15:31:20.517384 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:31:20.563890 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:31:20.564186 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:31:20.568143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:31:20.568589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:31:20.588115 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:31:20.590424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:31:20.593907 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:31:20.598992 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Feb 13 15:31:20.602390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:31:20.613020 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:31:20.637122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:31:20.637366 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:31:20.644236 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:31:20.647751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:31:20.647821 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 15:31:20.650508 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:31:20.668372 augenrules[1727]: No rules Feb 13 15:31:20.678702 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:31:20.678964 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:31:20.685835 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:31:20.705227 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:31:20.720665 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:31:20.911449 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:31:20.994981 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:31:21.017398 systemd-resolved[1683]: Positive Trust Anchors: Feb 13 15:31:21.017433 systemd-resolved[1683]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:31:21.017500 systemd-resolved[1683]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:31:21.031026 systemd-networkd[1740]: lo: Link UP Feb 13 15:31:21.031041 systemd-networkd[1740]: lo: Gained carrier Feb 13 15:31:21.034855 systemd-networkd[1740]: Enumeration completed Feb 13 15:31:21.035109 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:31:21.043298 systemd-resolved[1683]: Defaulting to hostname 'linux'. Feb 13 15:31:21.044667 systemd-networkd[1740]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:31:21.044673 systemd-networkd[1740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:31:21.050715 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:31:21.052628 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:31:21.054599 systemd[1]: Reached target network.target - Network. Feb 13 15:31:21.055712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:31:21.058654 systemd-networkd[1740]: eth0: Link UP Feb 13 15:31:21.059564 systemd-networkd[1740]: eth0: Gained carrier Feb 13 15:31:21.060037 systemd-networkd[1740]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
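The "Positive Trust Anchors" entry that systemd-resolved logs above is the root zone's DNSSEC DS record: key tag 20326 identifies the 2018 root KSK, algorithm 8 is RSA/SHA-256, and digest type 2 is SHA-256. Splitting such a record into its fields is mechanical, as this small parser shows:

    def parse_ds(rr):
        # "<owner> IN DS <key-tag> <algorithm> <digest-type> <digest>"
        owner, rrclass, rrtype, key_tag, alg, digest_type, digest = rr.split()
        assert rrclass == "IN" and rrtype == "DS"
        return {
            "owner": owner,
            "key_tag": int(key_tag),          # 20326: the KSK-2018 root key
            "algorithm": int(alg),            # 8: RSA/SHA-256
            "digest_type": int(digest_type),  # 2: SHA-256
            "digest": digest,
        }

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))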
Feb 13 15:31:21.074780 systemd-networkd[1740]: eth0: DHCPv4 address 172.31.29.23/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:31:21.126300 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:31:21.134304 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:31:21.136303 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:31:21.139311 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:31:21.144304 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 15:31:21.163266 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 15:31:21.145184 systemd-networkd[1740]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:31:21.193300 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1747) Feb 13 15:31:21.256300 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:31:21.272945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:31:21.410879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:31:21.528467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:31:21.539490 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:31:21.545499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:31:21.549450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:21.560214 lvm[1852]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:31:21.585552 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:31:21.597246 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:31:21.599229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:31:21.601021 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:31:21.602292 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:31:21.603731 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:31:21.605389 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:31:21.606722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:31:21.609415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:31:21.611019 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:31:21.611061 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:31:21.612212 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:31:21.614816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:31:21.617926 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:31:21.634482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
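The DHCPv4 lease above (172.31.29.23/20 with gateway 172.31.16.1 from server 172.31.16.1) can be sanity-checked with the standard library: a /20 prefix places the address in 172.31.16.0/20, so the gateway sits at the first host of the same subnet:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.23/20")
    net = iface.network
    print(net)                                         # 172.31.16.0/20
    print(net.broadcast_address)                       # 172.31.31.255
    print(ipaddress.ip_address("172.31.16.1") in net)  # True: gateway is in-subnet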
Feb 13 15:31:21.648892 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:31:21.655475 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:31:21.660131 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:31:21.663205 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:31:21.665448 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:31:21.665669 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:31:21.682500 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:31:21.688813 lvm[1860]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:31:21.703542 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:31:21.713498 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:31:21.726955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:31:21.747802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:31:21.749065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:31:21.772090 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:31:21.780201 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:31:21.789654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:31:21.795084 jq[1864]: false Feb 13 15:31:21.802440 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:31:21.812649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:31:21.822660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:31:21.835485 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:31:21.837235 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:31:21.837873 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:31:21.846494 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:31:21.853129 extend-filesystems[1865]: Found loop4 Feb 13 15:31:21.854417 extend-filesystems[1865]: Found loop5 Feb 13 15:31:21.854417 extend-filesystems[1865]: Found loop6 Feb 13 15:31:21.854417 extend-filesystems[1865]: Found loop7 Feb 13 15:31:21.854417 extend-filesystems[1865]: Found nvme0n1 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p1 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p2 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p3 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found usr Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p4 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p6 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p7 Feb 13 15:31:21.859885 extend-filesystems[1865]: Found nvme0n1p9 Feb 13 15:31:21.859885 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9 Feb 13 15:31:21.875519 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 15:31:21.879223 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:31:21.893393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:31:21.895345 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:31:21.922304 jq[1880]: true Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: ---------------------------------------------------- Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: corporation. Support and training for ntp-4 are Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: available at https://www.nwtime.org/support Feb 13 15:31:21.932908 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: ---------------------------------------------------- Feb 13 15:31:21.929315 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting Feb 13 15:31:21.929346 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:31:21.929357 ntpd[1867]: ---------------------------------------------------- Feb 13 15:31:21.929367 ntpd[1867]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:31:21.929378 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:31:21.929388 ntpd[1867]: corporation. Support and training for ntp-4 are Feb 13 15:31:21.929397 ntpd[1867]: available at https://www.nwtime.org/support Feb 13 15:31:21.929407 ntpd[1867]: ---------------------------------------------------- Feb 13 15:31:21.949474 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9 Feb 13 15:31:21.951526 coreos-metadata[1862]: Feb 13 15:31:21.947 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:31:21.953579 extend-filesystems[1901]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:31:21.957951 coreos-metadata[1862]: Feb 13 15:31:21.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:31:21.962943 update_engine[1876]: I20250213 15:31:21.962836 1876 main.cc:92] Flatcar Update Engine starting Feb 13 15:31:21.964795 ntpd[1867]: proto: precision = 0.091 usec (-23) Feb 13 15:31:21.965112 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: proto: precision = 0.091 usec (-23) Feb 13 15:31:21.965155 coreos-metadata[1862]: Feb 13 15:31:21.965 INFO Fetch successful Feb 13 15:31:21.965155 coreos-metadata[1862]: Feb 13 15:31:21.965 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:31:21.965866 ntpd[1867]: basedate set to 2025-02-01 Feb 13 15:31:21.965892 ntpd[1867]: gps base set to 2025-02-02 (week 2352) Feb 13 15:31:21.966029 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: basedate set to 2025-02-01 Feb 13 15:31:21.966029 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: gps base set to 2025-02-02 (week 2352) Feb 13 15:31:21.970299 coreos-metadata[1862]: Feb 13 15:31:21.968 INFO Fetch successful Feb 13 15:31:21.970299 coreos-metadata[1862]: Feb 13 15:31:21.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:31:21.970299 coreos-metadata[1862]: Feb 13 
15:31:21.969 INFO Fetch successful Feb 13 15:31:21.970299 coreos-metadata[1862]: Feb 13 15:31:21.969 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:31:21.970633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:31:21.970901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:31:21.975130 coreos-metadata[1862]: Feb 13 15:31:21.975 INFO Fetch successful Feb 13 15:31:21.975130 coreos-metadata[1862]: Feb 13 15:31:21.975 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:31:21.979333 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:31:21.980944 coreos-metadata[1862]: Feb 13 15:31:21.979 INFO Fetch failed with 404: resource not found Feb 13 15:31:21.980944 coreos-metadata[1862]: Feb 13 15:31:21.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:31:21.982086 coreos-metadata[1862]: Feb 13 15:31:21.982 INFO Fetch successful Feb 13 15:31:21.982190 coreos-metadata[1862]: Feb 13 15:31:21.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:31:21.983900 dbus-daemon[1863]: [system] SELinux support is enabled Feb 13 15:31:21.984118 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:31:21.991125 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:31:21.992153 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:31:21.994844 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:31:21.994902 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:31:21.995831 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:31:21.996057 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:31:21.996057 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:31:21.995903 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:31:21.996326 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:31:21.998499 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 15:31:21.998396 ntpd[1867]: Listen normally on 3 eth0 172.31.29.23:123 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listen normally on 3 eth0 172.31.29.23:123 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listen normally on 4 lo [::1]:123 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: bind(21) AF_INET6 fe80::499:c1ff:fe2d:5d77%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: unable to create socket on eth0 (5) for fe80::499:c1ff:fe2d:5d77%2#123 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: failed to init interface for address fe80::499:c1ff:fe2d:5d77%2 Feb 13 15:31:21.999906 ntpd[1867]: 13 Feb 15:31:21 ntpd[1867]: Listening on routing socket on fd #21 for interface updates Feb 13 15:31:21.998533 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:31:21.998446 ntpd[1867]: Listen normally on 4 lo [::1]:123 Feb 13 15:31:21.998502 ntpd[1867]: bind(21) AF_INET6 fe80::499:c1ff:fe2d:5d77%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:31:21.998525 ntpd[1867]: unable to create socket on eth0 (5) for fe80::499:c1ff:fe2d:5d77%2#123 Feb 13 15:31:21.998541 ntpd[1867]: failed to init interface for address fe80::499:c1ff:fe2d:5d77%2 Feb 13 15:31:21.998578 ntpd[1867]: Listening on routing socket on fd #21 for interface updates Feb 13 15:31:22.005948 coreos-metadata[1862]: Feb 13 15:31:22.005 INFO Fetch successful Feb 13 15:31:22.005948 coreos-metadata[1862]: Feb 13 15:31:22.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:31:22.006176 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1740 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:31:22.009800 (ntainerd)[1898]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:31:22.010331 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:31:22.015370 update_engine[1876]: I20250213 15:31:22.012542 1876 update_check_scheduler.cc:74] Next update check in 6m54s Feb 13 15:31:22.016626 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:31:22.017031 ntpd[1867]: 13 Feb 15:31:22 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:31:22.017031 ntpd[1867]: 13 Feb 15:31:22 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:31:22.016659 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:31:22.023911 coreos-metadata[1862]: Feb 13 15:31:22.021 INFO Fetch successful Feb 13 15:31:22.023911 coreos-metadata[1862]: Feb 13 15:31:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:31:22.023911 coreos-metadata[1862]: Feb 13 15:31:22.022 INFO Fetch successful Feb 13 15:31:22.023911 coreos-metadata[1862]: Feb 13 15:31:22.022 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:31:22.023911 coreos-metadata[1862]: Feb 13 15:31:22.023 INFO Fetch successful Feb 13 15:31:22.028568 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
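
Note: the coreos-metadata fetches above follow the IMDSv2 pattern visible in the log: a PUT to /latest/api/token for a session token, then token-authenticated GETs against the 2021-01-03 metadata paths. A minimal Python sketch of that flow, assuming only the endpoint and paths shown in the log (absent keys such as meta-data/ipv6 come back as HTTP 404, matching the "Fetch failed with 404" above):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds=21600):
        # PUT /latest/api/token, as in "Putting http://169.254.169.254/latest/api/token"
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        # GET a metadata path with the session token attached.
        req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    token = imds_token()
    print(imds_get("2021-01-03/meta-data/instance-id", token))
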
Feb 13 15:31:22.033168 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:31:22.068199 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1741) Feb 13 15:31:22.073318 jq[1896]: true Feb 13 15:31:22.100678 tar[1882]: linux-amd64/helm Feb 13 15:31:22.111332 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:31:22.159735 extend-filesystems[1901]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:31:22.159735 extend-filesystems[1901]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:31:22.159735 extend-filesystems[1901]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:31:22.175312 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:31:22.172404 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:31:22.172955 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:31:22.186453 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:31:22.201907 systemd-logind[1875]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:31:22.201938 systemd-logind[1875]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:31:22.201960 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:31:22.203569 systemd-logind[1875]: New seat seat0. Feb 13 15:31:22.204753 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:31:22.232485 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:31:22.234830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:31:22.359576 locksmithd[1919]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:31:22.417552 bash[1976]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:31:22.418848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:31:22.438719 systemd[1]: Starting sshkeys.service... Feb 13 15:31:22.483546 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:31:22.497813 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:31:22.746428 systemd-networkd[1740]: eth0: Gained IPv6LL Feb 13 15:31:22.761860 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:31:22.800986 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:31:22.816812 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:31:22.831797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:22.844856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:31:22.929197 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:31:22.932611 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:31:22.941937 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1912 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:31:22.974094 systemd[1]: Starting polkit.service - Authorization Manager... 
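
Note: for scale, the resize2fs/extend-filesystems messages above describe an online ext4 grow of /dev/nvme0n1p9 from 553472 to 1489915 blocks at a 4 KiB block size. A quick back-of-the-envelope check using only the numbers in the log:

    # Sizes implied by the resize2fs output above (4 KiB ext4 blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_489_915
    print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~5.68 GiB
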
Feb 13 15:31:23.000985 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:31:23.096509 coreos-metadata[2006]: Feb 13 15:31:23.096 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:31:23.096509 coreos-metadata[2006]: Feb 13 15:31:23.096 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:31:23.096509 coreos-metadata[2006]: Feb 13 15:31:23.096 INFO Fetch successful Feb 13 15:31:23.096509 coreos-metadata[2006]: Feb 13 15:31:23.096 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:31:23.096509 coreos-metadata[2006]: Feb 13 15:31:23.096 INFO Fetch successful Feb 13 15:31:23.107786 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:31:23.115102 unknown[2006]: wrote ssh authorized keys file for user: core Feb 13 15:31:23.147433 polkitd[2060]: Started polkitd version 121 Feb 13 15:31:23.162159 polkitd[2060]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:31:23.162250 polkitd[2060]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:31:23.162936 polkitd[2060]: Finished loading, compiling and executing 2 rules Feb 13 15:31:23.163780 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:31:23.163556 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:31:23.164420 polkitd[2060]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:31:23.172574 amazon-ssm-agent[2050]: Initializing new seelog logger Feb 13 15:31:23.172574 amazon-ssm-agent[2050]: New Seelog Logger Creation Complete Feb 13 15:31:23.174866 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.174866 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.174866 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 processing appconfig overrides Feb 13 15:31:23.175981 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.176149 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO Proxy environment variables: Feb 13 15:31:23.176382 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.177158 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 processing appconfig overrides Feb 13 15:31:23.177746 update-ssh-keys[2071]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:31:23.179594 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:31:23.183785 systemd[1]: Finished sshkeys.service. Feb 13 15:31:23.196816 systemd-hostnamed[1912]: Hostname set to (transient) Feb 13 15:31:23.196938 systemd-resolved[1683]: System hostname changed to 'ip-172-31-29-23'. Feb 13 15:31:23.199559 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.199559 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.199559 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 processing appconfig overrides Feb 13 15:31:23.212301 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:31:23.212301 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:31:23.212301 amazon-ssm-agent[2050]: 2025/02/13 15:31:23 processing appconfig overrides Feb 13 15:31:23.300447 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO https_proxy: Feb 13 15:31:23.364577 containerd[1898]: time="2025-02-13T15:31:23.364476748Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:31:23.401626 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO http_proxy: Feb 13 15:31:23.507749 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO no_proxy: Feb 13 15:31:23.508832 containerd[1898]: time="2025-02-13T15:31:23.508782316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.518612 containerd[1898]: time="2025-02-13T15:31:23.518548073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:31:23.518612 containerd[1898]: time="2025-02-13T15:31:23.518607255Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:31:23.518983 containerd[1898]: time="2025-02-13T15:31:23.518633669Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519143955Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519182999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519264212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519301699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519529280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519551069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519571847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519586844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.521512 containerd[1898]: time="2025-02-13T15:31:23.519676245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:31:23.522428 containerd[1898]: time="2025-02-13T15:31:23.522394733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:31:23.522627 containerd[1898]: time="2025-02-13T15:31:23.522594809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:31:23.522692 containerd[1898]: time="2025-02-13T15:31:23.522625934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:31:23.522842 containerd[1898]: time="2025-02-13T15:31:23.522749741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:31:23.522961 containerd[1898]: time="2025-02-13T15:31:23.522903323Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.533326664Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.533405671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.533431041Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.533458038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.533478227Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.534101068Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.535607412Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536050091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536079611Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536101932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536123287Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536154171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536174337Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.536302 containerd[1898]: time="2025-02-13T15:31:23.536195645Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536219513Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536238378Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536259069Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536291969Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536358812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536379634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536396739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536413626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536429056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536446216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536461856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536480656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536502637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.537028 containerd[1898]: time="2025-02-13T15:31:23.536526426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536606673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536628995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536648824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536674432Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536711841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536733697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536751023Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536806419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536831209Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536847398Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536865786Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536880457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536957956Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:31:23.540148 containerd[1898]: time="2025-02-13T15:31:23.536976311Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:31:23.546905 containerd[1898]: time="2025-02-13T15:31:23.536991848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.541175547Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.541327484Z" level=info msg="Connect containerd service" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.541380794Z" level=info msg="using legacy CRI server" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.541392079Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.541565361Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.544585304Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:31:23.546960 
containerd[1898]: time="2025-02-13T15:31:23.545831060Z" level=info msg="Start subscribing containerd event" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.545895126Z" level=info msg="Start recovering state" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.545981169Z" level=info msg="Start event monitor" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.546005642Z" level=info msg="Start snapshots syncer" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.546018317Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:31:23.546960 containerd[1898]: time="2025-02-13T15:31:23.546029064Z" level=info msg="Start streaming server" Feb 13 15:31:23.551961 containerd[1898]: time="2025-02-13T15:31:23.548599875Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:31:23.551961 containerd[1898]: time="2025-02-13T15:31:23.548671489Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:31:23.549470 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:31:23.553433 containerd[1898]: time="2025-02-13T15:31:23.552611287Z" level=info msg="containerd successfully booted in 0.191826s" Feb 13 15:31:23.619583 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:31:23.724075 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:31:23.827139 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO Agent will take identity from EC2 Feb 13 15:31:23.927046 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:31:24.030290 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:31:24.127587 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:31:24.229489 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:31:24.332574 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 15:31:24.437066 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:31:24.462486 tar[1882]: linux-amd64/LICENSE Feb 13 15:31:24.462486 tar[1882]: linux-amd64/README.md Feb 13 15:31:24.503929 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:31:24.538294 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:31:24.635783 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [Registrar] Starting registrar module Feb 13 15:31:24.736210 amazon-ssm-agent[2050]: 2025-02-13 15:31:23 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:31:24.767427 amazon-ssm-agent[2050]: 2025-02-13 15:31:24 INFO [EC2Identity] EC2 registration was successful. 
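
Note: the cni load error above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin's conf syncer (NetworkPluginConfDir in the config dump) stays degraded until something writes a network config there, typically a CNI add-on installed after the cluster is bootstrapped. A purely illustrative sketch of the kind of conflist it watches for; the network name, bridge device, and subnet are hypothetical, and on this host the real file would be written by whichever CNI add-on is installed later:

    import json

    # Hypothetical bridge/host-local conflist for /etc/cni/net.d.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-bridge",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            }
        ],
    }

    with open("/etc/cni/net.d/10-example.conflist", "w") as f:
        json.dump(conflist, f, indent=2)
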
Feb 13 15:31:24.767427 amazon-ssm-agent[2050]: 2025-02-13 15:31:24 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:31:24.767427 amazon-ssm-agent[2050]: 2025-02-13 15:31:24 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:31:24.767427 amazon-ssm-agent[2050]: 2025-02-13 15:31:24 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:31:24.836142 amazon-ssm-agent[2050]: 2025-02-13 15:31:24 INFO [CredentialRefresher] Next credential rotation will be in 30.866660463583333 minutes Feb 13 15:31:24.929897 ntpd[1867]: Listen normally on 6 eth0 [fe80::499:c1ff:fe2d:5d77%2]:123 Feb 13 15:31:24.931483 ntpd[1867]: 13 Feb 15:31:24 ntpd[1867]: Listen normally on 6 eth0 [fe80::499:c1ff:fe2d:5d77%2]:123 Feb 13 15:31:24.991571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:25.004837 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:25.698902 sshd_keygen[1907]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:31:25.738335 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:31:25.750462 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:31:25.758766 systemd[1]: Started sshd@0-172.31.29.23:22-139.178.89.65:37404.service - OpenSSH per-connection server daemon (139.178.89.65:37404). Feb 13 15:31:25.774682 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:31:25.775150 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:31:25.789670 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:31:25.808143 amazon-ssm-agent[2050]: 2025-02-13 15:31:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:31:25.827958 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:31:25.836700 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:31:25.847809 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:31:25.849424 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:31:25.850905 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:31:25.853048 systemd[1]: Startup finished in 836ms (kernel) + 9.544s (initrd) + 9.637s (userspace) = 20.018s. Feb 13 15:31:25.912615 amazon-ssm-agent[2050]: 2025-02-13 15:31:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2118) started Feb 13 15:31:26.012827 amazon-ssm-agent[2050]: 2025-02-13 15:31:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:31:26.118620 sshd[2110]: Accepted publickey for core from 139.178.89.65 port 37404 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:26.121602 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:26.141887 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:31:26.150702 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:31:26.167742 systemd-logind[1875]: New session 1 of user core. Feb 13 15:31:26.181046 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
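
Note: ntpd's earlier bind(21) failure resolves above: the fe80:: link-local address only becomes bindable after eth0 "Gained IPv6LL", at which point ntpd's routing-socket listener notices the change and opens listen socket 6. A hedged sketch of the same scoped bind (address and interface taken from the log; binding port 123 requires root):

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    scope = socket.if_nametoindex("eth0")  # link-local binds need a scope id
    # Fails with "Cannot assign requested address" until the kernel has
    # actually assigned the link-local address to eth0.
    s.bind(("fe80::499:c1ff:fe2d:5d77", 123, 0, scope))
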
Feb 13 15:31:26.191780 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:31:26.199178 (systemd)[2138]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:31:26.215833 kubelet[2094]: E0213 15:31:26.215730 2094 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:26.228191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:26.228435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:26.228789 systemd[1]: kubelet.service: Consumed 1.234s CPU time. Feb 13 15:31:26.352409 systemd[2138]: Queued start job for default target default.target. Feb 13 15:31:26.359895 systemd[2138]: Created slice app.slice - User Application Slice. Feb 13 15:31:26.359940 systemd[2138]: Reached target paths.target - Paths. Feb 13 15:31:26.359960 systemd[2138]: Reached target timers.target - Timers. Feb 13 15:31:26.361467 systemd[2138]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:31:26.392311 systemd[2138]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:31:26.392482 systemd[2138]: Reached target sockets.target - Sockets. Feb 13 15:31:26.392511 systemd[2138]: Reached target basic.target - Basic System. Feb 13 15:31:26.392568 systemd[2138]: Reached target default.target - Main User Target. Feb 13 15:31:26.392611 systemd[2138]: Startup finished in 182ms. Feb 13 15:31:26.393190 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:31:26.404533 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:31:26.563427 systemd[1]: Started sshd@1-172.31.29.23:22-139.178.89.65:51252.service - OpenSSH per-connection server daemon (139.178.89.65:51252). Feb 13 15:31:26.766310 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 51252 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:26.767853 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:26.777289 systemd-logind[1875]: New session 2 of user core. Feb 13 15:31:26.784504 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:31:26.911234 sshd[2153]: Connection closed by 139.178.89.65 port 51252 Feb 13 15:31:26.912628 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:26.916000 systemd[1]: sshd@1-172.31.29.23:22-139.178.89.65:51252.service: Deactivated successfully. Feb 13 15:31:26.918940 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:31:26.920239 systemd-logind[1875]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:31:26.921384 systemd-logind[1875]: Removed session 2. Feb 13 15:31:26.941945 systemd[1]: Started sshd@2-172.31.29.23:22-139.178.89.65:51254.service - OpenSSH per-connection server daemon (139.178.89.65:51254). Feb 13 15:31:27.110385 sshd[2158]: Accepted publickey for core from 139.178.89.65 port 51254 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:27.113017 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:27.120759 systemd-logind[1875]: New session 3 of user core. 
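
Note: the kubelet failure above is only a missing file: on this image kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, so it exits and systemd keeps scheduling restarts (see the rising restart counter later in this log) until the file appears. A trivial restatement of the check kubelet is failing on:

    from pathlib import Path

    # kubelet exits with the error above until kubeadm (or an operator)
    # writes this file; nothing else in the unit is broken.
    if not Path("/var/lib/kubelet/config.yaml").exists():
        print("kubelet will keep crash-looping until its config is written")
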
Feb 13 15:31:27.129631 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:31:27.255891 sshd[2160]: Connection closed by 139.178.89.65 port 51254 Feb 13 15:31:27.256701 sshd-session[2158]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:27.268609 systemd[1]: sshd@2-172.31.29.23:22-139.178.89.65:51254.service: Deactivated successfully. Feb 13 15:31:27.273245 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:31:27.274487 systemd-logind[1875]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:31:27.277784 systemd-logind[1875]: Removed session 3. Feb 13 15:31:27.312293 systemd[1]: Started sshd@3-172.31.29.23:22-139.178.89.65:51264.service - OpenSSH per-connection server daemon (139.178.89.65:51264). Feb 13 15:31:27.530188 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 51264 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:27.533053 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:27.543723 systemd-logind[1875]: New session 4 of user core. Feb 13 15:31:27.550809 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:31:27.689424 sshd[2167]: Connection closed by 139.178.89.65 port 51264 Feb 13 15:31:27.690189 sshd-session[2165]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:27.698689 systemd[1]: sshd@3-172.31.29.23:22-139.178.89.65:51264.service: Deactivated successfully. Feb 13 15:31:27.701591 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:31:27.704139 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:31:27.706463 systemd-logind[1875]: Removed session 4. Feb 13 15:31:27.732884 systemd[1]: Started sshd@4-172.31.29.23:22-139.178.89.65:51274.service - OpenSSH per-connection server daemon (139.178.89.65:51274). Feb 13 15:31:27.915931 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 51274 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:27.916647 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:27.926662 systemd-logind[1875]: New session 5 of user core. Feb 13 15:31:27.940220 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:31:28.056206 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:31:28.056627 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:28.072184 sudo[2175]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:28.095198 sshd[2174]: Connection closed by 139.178.89.65 port 51274 Feb 13 15:31:28.097071 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:28.102768 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:31:28.103074 systemd[1]: sshd@4-172.31.29.23:22-139.178.89.65:51274.service: Deactivated successfully. Feb 13 15:31:28.105474 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:31:28.106688 systemd-logind[1875]: Removed session 5. Feb 13 15:31:28.131834 systemd[1]: Started sshd@5-172.31.29.23:22-139.178.89.65:51286.service - OpenSSH per-connection server daemon (139.178.89.65:51286). 
Feb 13 15:31:28.370079 sshd[2180]: Accepted publickey for core from 139.178.89.65 port 51286 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:28.371539 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:28.381064 systemd-logind[1875]: New session 6 of user core. Feb 13 15:31:28.386513 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:31:28.502798 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:31:28.503197 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:28.511054 sudo[2184]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:28.517782 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:31:28.518185 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:28.535823 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:31:28.575161 augenrules[2206]: No rules Feb 13 15:31:28.576625 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:31:28.576811 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:31:28.579087 sudo[2183]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:28.602496 sshd[2182]: Connection closed by 139.178.89.65 port 51286 Feb 13 15:31:28.605795 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:28.621253 systemd[1]: sshd@5-172.31.29.23:22-139.178.89.65:51286.service: Deactivated successfully. Feb 13 15:31:28.624961 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:31:28.625996 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:31:28.650122 systemd[1]: Started sshd@6-172.31.29.23:22-139.178.89.65:51292.service - OpenSSH per-connection server daemon (139.178.89.65:51292). Feb 13 15:31:28.654487 systemd-logind[1875]: Removed session 6. Feb 13 15:31:28.827326 sshd[2214]: Accepted publickey for core from 139.178.89.65 port 51292 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:31:28.829291 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:28.834322 systemd-logind[1875]: New session 7 of user core. Feb 13 15:31:28.848551 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:31:29.327750 systemd-resolved[1683]: Clock change detected. Flushing caches. Feb 13 15:31:29.352347 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:31:29.353104 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:29.899720 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:31:29.900645 (dockerd)[2236]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:31:30.537735 dockerd[2236]: time="2025-02-13T15:31:30.536812165Z" level=info msg="Starting up" Feb 13 15:31:30.713125 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1487495991-merged.mount: Deactivated successfully. Feb 13 15:31:30.931890 dockerd[2236]: time="2025-02-13T15:31:30.931435264Z" level=info msg="Loading containers: start." 
Feb 13 15:31:31.307284 kernel: Initializing XFRM netlink socket Feb 13 15:31:31.390581 (udev-worker)[2257]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:31:31.482719 systemd-networkd[1740]: docker0: Link UP Feb 13 15:31:31.518845 dockerd[2236]: time="2025-02-13T15:31:31.518794158Z" level=info msg="Loading containers: done." Feb 13 15:31:31.541221 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck447095098-merged.mount: Deactivated successfully. Feb 13 15:31:31.554867 dockerd[2236]: time="2025-02-13T15:31:31.554819531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:31:31.555398 dockerd[2236]: time="2025-02-13T15:31:31.554943249Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:31:31.555398 dockerd[2236]: time="2025-02-13T15:31:31.555089087Z" level=info msg="Daemon has completed initialization" Feb 13 15:31:31.610457 dockerd[2236]: time="2025-02-13T15:31:31.607780089Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:31:31.607958 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:31:33.067118 containerd[1898]: time="2025-02-13T15:31:33.066722779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:31:33.772538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913125025.mount: Deactivated successfully. Feb 13 15:31:36.623523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:31:36.632344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:31:37.025669 containerd[1898]: time="2025-02-13T15:31:37.021106628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.034761 containerd[1898]: time="2025-02-13T15:31:37.034680135Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:31:37.040658 containerd[1898]: time="2025-02-13T15:31:37.040265221Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.053207 containerd[1898]: time="2025-02-13T15:31:37.053114885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.054978 containerd[1898]: time="2025-02-13T15:31:37.054608474Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 3.987836399s" Feb 13 15:31:37.054978 containerd[1898]: time="2025-02-13T15:31:37.054676379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:31:37.121144 containerd[1898]: time="2025-02-13T15:31:37.121082914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:31:37.129330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:37.147258 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:37.226344 kubelet[2494]: E0213 15:31:37.226282 2494 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:37.247591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:37.249088 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
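
Note: the pull timings containerd logs give a rough effective bandwidth; for the kube-apiserver image above (size and duration exactly as logged, ignoring layer compression and any cached blobs, so this is only a lower-bound estimate):

    # Effective pull rate for registry.k8s.io/kube-apiserver:v1.29.14.
    size_bytes = 35_139_083      # 'size "35139083"' in the log
    seconds = 3.987836399        # 'in 3.987836399s'
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~8.8 MB/s
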
Feb 13 15:31:40.737668 containerd[1898]: time="2025-02-13T15:31:40.737609004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.739157 containerd[1898]: time="2025-02-13T15:31:40.739025107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:31:40.740692 containerd[1898]: time="2025-02-13T15:31:40.740626850Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.744839 containerd[1898]: time="2025-02-13T15:31:40.744171438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.745822 containerd[1898]: time="2025-02-13T15:31:40.745778807Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 3.624640669s" Feb 13 15:31:40.745921 containerd[1898]: time="2025-02-13T15:31:40.745828419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 15:31:40.778835 containerd[1898]: time="2025-02-13T15:31:40.778788875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:31:43.295303 containerd[1898]: time="2025-02-13T15:31:43.294784382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:43.307023 containerd[1898]: time="2025-02-13T15:31:43.306959120Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:31:43.321933 containerd[1898]: time="2025-02-13T15:31:43.321864301Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:43.342824 containerd[1898]: time="2025-02-13T15:31:43.342747738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:43.344653 containerd[1898]: time="2025-02-13T15:31:43.344455205Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 2.565625511s" Feb 13 15:31:43.344653 containerd[1898]: time="2025-02-13T15:31:43.344500935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:31:43.378816 
containerd[1898]: time="2025-02-13T15:31:43.378779707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:31:44.895616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185991.mount: Deactivated successfully. Feb 13 15:31:45.695730 containerd[1898]: time="2025-02-13T15:31:45.695671050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.698724 containerd[1898]: time="2025-02-13T15:31:45.698493823Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:31:45.703071 containerd[1898]: time="2025-02-13T15:31:45.700383814Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.704566 containerd[1898]: time="2025-02-13T15:31:45.704523564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.706701 containerd[1898]: time="2025-02-13T15:31:45.706597517Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.327776736s" Feb 13 15:31:45.707766 containerd[1898]: time="2025-02-13T15:31:45.707736411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:31:45.760512 containerd[1898]: time="2025-02-13T15:31:45.760466973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:31:46.328741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187347551.mount: Deactivated successfully. Feb 13 15:31:47.374415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:31:47.382636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:31:47.926665 containerd[1898]: time="2025-02-13T15:31:47.926599955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:47.938154 containerd[1898]: time="2025-02-13T15:31:47.936696107Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:31:47.941229 containerd[1898]: time="2025-02-13T15:31:47.941182093Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:47.955052 containerd[1898]: time="2025-02-13T15:31:47.955003378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:47.956885 containerd[1898]: time="2025-02-13T15:31:47.956843902Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.196340049s" Feb 13 15:31:47.956885 containerd[1898]: time="2025-02-13T15:31:47.956883378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:31:48.000126 containerd[1898]: time="2025-02-13T15:31:48.000086406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:31:48.081419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:48.101084 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:48.204881 kubelet[2591]: E0213 15:31:48.204698 2591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:48.208504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:48.208751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:48.602417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505952672.mount: Deactivated successfully. 
Feb 13 15:31:48.617361 containerd[1898]: time="2025-02-13T15:31:48.617308013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:48.619248 containerd[1898]: time="2025-02-13T15:31:48.619167928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:31:48.621226 containerd[1898]: time="2025-02-13T15:31:48.621154812Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:48.625413 containerd[1898]: time="2025-02-13T15:31:48.624395645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:48.625413 containerd[1898]: time="2025-02-13T15:31:48.625251029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 625.122826ms" Feb 13 15:31:48.625413 containerd[1898]: time="2025-02-13T15:31:48.625288032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:31:48.652509 containerd[1898]: time="2025-02-13T15:31:48.652463912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:31:49.189911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661513029.mount: Deactivated successfully. Feb 13 15:31:53.630700 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:31:53.852231 containerd[1898]: time="2025-02-13T15:31:53.852172084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:53.854303 containerd[1898]: time="2025-02-13T15:31:53.854236962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:31:53.856240 containerd[1898]: time="2025-02-13T15:31:53.856201013Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:53.863031 containerd[1898]: time="2025-02-13T15:31:53.862961991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:53.864362 containerd[1898]: time="2025-02-13T15:31:53.864194871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.211692343s" Feb 13 15:31:53.864362 containerd[1898]: time="2025-02-13T15:31:53.864235280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:31:57.875365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:57.884599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:57.923177 systemd[1]: Reloading requested from client PID 2720 ('systemctl') (unit session-7.scope)... Feb 13 15:31:57.923195 systemd[1]: Reloading... Feb 13 15:31:58.070215 zram_generator::config[2763]: No configuration found. Feb 13 15:31:58.273077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:58.385613 systemd[1]: Reloading finished in 461 ms. Feb 13 15:31:58.445841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:58.453835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:58.457567 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:31:58.457815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:58.466679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:58.733864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:58.747895 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:31:58.807791 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:58.807791 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:31:58.807791 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:58.809606 kubelet[2822]: I0213 15:31:58.809548 2822 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:31:59.226187 kubelet[2822]: I0213 15:31:59.226127 2822 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:31:59.226187 kubelet[2822]: I0213 15:31:59.226185 2822 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:31:59.226476 kubelet[2822]: I0213 15:31:59.226455 2822 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:31:59.258333 kubelet[2822]: E0213 15:31:59.258248 2822 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.258824 kubelet[2822]: I0213 15:31:59.258605 2822 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:31:59.276804 kubelet[2822]: I0213 15:31:59.276772 2822 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:31:59.277882 kubelet[2822]: I0213 15:31:59.277759 2822 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:31:59.279065 kubelet[2822]: I0213 15:31:59.279032 2822 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:31:59.279719 kubelet[2822]: I0213 15:31:59.279696 2822 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:31:59.279788 kubelet[2822]: I0213 15:31:59.279724 2822 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 
15:31:59.279876 kubelet[2822]: I0213 15:31:59.279858 2822 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:59.280005 kubelet[2822]: I0213 15:31:59.279990 2822 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:31:59.280451 kubelet[2822]: I0213 15:31:59.280014 2822 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:31:59.280679 kubelet[2822]: W0213 15:31:59.280635 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-23&limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.280793 kubelet[2822]: E0213 15:31:59.280782 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-23&limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.280867 kubelet[2822]: I0213 15:31:59.280818 2822 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:31:59.280938 kubelet[2822]: I0213 15:31:59.280929 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:31:59.283177 kubelet[2822]: W0213 15:31:59.283117 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.283619 kubelet[2822]: E0213 15:31:59.283569 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.284118 kubelet[2822]: I0213 15:31:59.283793 2822 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:31:59.298591 kubelet[2822]: I0213 15:31:59.297957 2822 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:31:59.298591 kubelet[2822]: W0213 15:31:59.298200 2822 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
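Every reflector list above fails with "connection refused" because the kubelet itself is about to start kube-apiserver as a static pod from /etc/kubernetes/manifests; the informers simply retry until the socket opens. A hypothetical reachability probe in the same spirit (the endpoint and the 200 ms starting interval are taken from the log; TLS verification is skipped only because this sketch checks liveness, not identity):

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed bootstrap cert at this stage, so
	// skip verification: we only care whether the endpoint answers at all.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}

	backoff := 200 * time.Millisecond // the lease controller above starts at 200ms
	for {
		resp, err := client.Get("https://172.31.29.23:6443/healthz")
		if err == nil {
			resp.Body.Close()
			log.Println("apiserver is answering")
			return
		}
		log.Printf("still refused: %v (retrying in %s)", err, backoff)
		time.Sleep(backoff)
		if backoff < 2*time.Second {
			backoff *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the lease retries logged
		}
	}
}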
Feb 13 15:31:59.300034 kubelet[2822]: I0213 15:31:59.300005 2822 server.go:1256] "Started kubelet" Feb 13 15:31:59.303153 kubelet[2822]: I0213 15:31:59.301617 2822 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:31:59.308026 kubelet[2822]: I0213 15:31:59.305338 2822 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:31:59.311513 kubelet[2822]: I0213 15:31:59.311476 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:31:59.315650 kubelet[2822]: I0213 15:31:59.315040 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:31:59.315650 kubelet[2822]: I0213 15:31:59.315318 2822 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:31:59.320933 kubelet[2822]: E0213 15:31:59.320905 2822 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.23:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-23.1823ce4ee29e40d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-23,UID:ip-172-31-29-23,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-23,},FirstTimestamp:2025-02-13 15:31:59.29993647 +0000 UTC m=+0.545675149,LastTimestamp:2025-02-13 15:31:59.29993647 +0000 UTC m=+0.545675149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-23,}" Feb 13 15:31:59.322944 kubelet[2822]: I0213 15:31:59.322922 2822 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:31:59.331291 kubelet[2822]: I0213 15:31:59.331254 2822 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:31:59.332874 kubelet[2822]: I0213 15:31:59.332782 2822 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:31:59.335011 kubelet[2822]: E0213 15:31:59.334977 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": dial tcp 172.31.29.23:6443: connect: connection refused" interval="200ms" Feb 13 15:31:59.341767 kubelet[2822]: I0213 15:31:59.340523 2822 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:31:59.341767 kubelet[2822]: I0213 15:31:59.340642 2822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:31:59.341767 kubelet[2822]: I0213 15:31:59.340784 2822 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 15:31:59.343652 kubelet[2822]: W0213 15:31:59.343596 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.343725 kubelet[2822]: E0213 15:31:59.343665 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.344606 kubelet[2822]: I0213 15:31:59.344575 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:31:59.345456 kubelet[2822]: I0213 15:31:59.345413 2822 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:31:59.345456 kubelet[2822]: I0213 15:31:59.345451 2822 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:31:59.345666 kubelet[2822]: E0213 15:31:59.345621 2822 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:31:59.355474 kubelet[2822]: I0213 15:31:59.355354 2822 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:31:59.357743 kubelet[2822]: W0213 15:31:59.357271 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.357743 kubelet[2822]: E0213 15:31:59.357340 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:31:59.357743 kubelet[2822]: E0213 15:31:59.357576 2822 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:31:59.375445 kubelet[2822]: I0213 15:31:59.375415 2822 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:31:59.375445 kubelet[2822]: I0213 15:31:59.375437 2822 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:31:59.375445 kubelet[2822]: I0213 15:31:59.375455 2822 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:59.381196 kubelet[2822]: I0213 15:31:59.381160 2822 policy_none.go:49] "None policy: Start" Feb 13 15:31:59.382262 kubelet[2822]: I0213 15:31:59.382206 2822 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:31:59.382378 kubelet[2822]: I0213 15:31:59.382284 2822 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:31:59.400035 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:31:59.411814 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:31:59.416999 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:31:59.424860 kubelet[2822]: I0213 15:31:59.424780 2822 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:31:59.425690 kubelet[2822]: I0213 15:31:59.425242 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:31:59.429280 kubelet[2822]: I0213 15:31:59.428915 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:31:59.430294 kubelet[2822]: E0213 15:31:59.430274 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.23:6443/api/v1/nodes\": dial tcp 172.31.29.23:6443: connect: connection refused" node="ip-172-31-29-23" Feb 13 15:31:59.430294 kubelet[2822]: E0213 15:31:59.430385 2822 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-23\" not found" Feb 13 15:31:59.446033 kubelet[2822]: I0213 15:31:59.445982 2822 topology_manager.go:215] "Topology Admit Handler" podUID="a876b9fcb11e0ea65472654010730c60" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-23" Feb 13 15:31:59.450569 kubelet[2822]: I0213 15:31:59.450531 2822 topology_manager.go:215] "Topology Admit Handler" podUID="fe9fdc5493f28cae5b2a842eb31f98f1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.452150 kubelet[2822]: I0213 15:31:59.452109 2822 topology_manager.go:215] "Topology Admit Handler" podUID="396144689781154b10488501b9790b14" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-23" Feb 13 15:31:59.463372 systemd[1]: Created slice kubepods-burstable-poda876b9fcb11e0ea65472654010730c60.slice - libcontainer container kubepods-burstable-poda876b9fcb11e0ea65472654010730c60.slice. Feb 13 15:31:59.474427 systemd[1]: Created slice kubepods-burstable-podfe9fdc5493f28cae5b2a842eb31f98f1.slice - libcontainer container kubepods-burstable-podfe9fdc5493f28cae5b2a842eb31f98f1.slice. Feb 13 15:31:59.486171 systemd[1]: Created slice kubepods-burstable-pod396144689781154b10488501b9790b14.slice - libcontainer container kubepods-burstable-pod396144689781154b10488501b9790b14.slice. 
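The kubepods-burstable-pod<UID>.slice units created above follow the systemd cgroup driver's naming scheme: QoS class plus the pod UID with dashes mapped to underscores, since "-" acts as a path separator in systemd unit names (compare kubepods-besteffort-pod3a98ad8e_a894_455a_86a6_e5d849738f43.slice further down). A small sketch of that mapping; the guaranteed-pod form, which lands directly under kubepods.slice, is an assumption not shown in this log:

package main

import (
	"fmt"
	"strings"
)

// podSlice derives the systemd slice name for a pod the way the kubelet's
// systemd cgroup driver does: escape dashes in the UID, prefix the QoS class.
func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" { // assumed: guaranteed pods skip the QoS segment
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(podSlice("burstable", "a876b9fcb11e0ea65472654010730c60"))
	fmt.Println(podSlice("besteffort", "3a98ad8e-a894-455a-86a6-e5d849738f43"))
}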
Feb 13 15:31:59.533969 kubelet[2822]: I0213 15:31:59.533932 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:31:59.534391 kubelet[2822]: I0213 15:31:59.533996 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.534391 kubelet[2822]: I0213 15:31:59.534026 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.534391 kubelet[2822]: I0213 15:31:59.534055 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.534391 kubelet[2822]: I0213 15:31:59.534092 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.534391 kubelet[2822]: I0213 15:31:59.534123 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/396144689781154b10488501b9790b14-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-23\" (UID: \"396144689781154b10488501b9790b14\") " pod="kube-system/kube-scheduler-ip-172-31-29-23" Feb 13 15:31:59.536060 kubelet[2822]: I0213 15:31:59.534164 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-ca-certs\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:31:59.536060 kubelet[2822]: I0213 15:31:59.534191 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:31:59.536060 kubelet[2822]: I0213 15:31:59.534223 2822 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:31:59.536060 kubelet[2822]: E0213 15:31:59.535865 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": dial tcp 172.31.29.23:6443: connect: connection refused" interval="400ms" Feb 13 15:31:59.632859 kubelet[2822]: I0213 15:31:59.632830 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:31:59.633239 kubelet[2822]: E0213 15:31:59.633214 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.23:6443/api/v1/nodes\": dial tcp 172.31.29.23:6443: connect: connection refused" node="ip-172-31-29-23" Feb 13 15:31:59.773786 containerd[1898]: time="2025-02-13T15:31:59.773662439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-23,Uid:a876b9fcb11e0ea65472654010730c60,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:59.779727 containerd[1898]: time="2025-02-13T15:31:59.779310535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-23,Uid:fe9fdc5493f28cae5b2a842eb31f98f1,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:59.792273 containerd[1898]: time="2025-02-13T15:31:59.792231274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-23,Uid:396144689781154b10488501b9790b14,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:59.936932 kubelet[2822]: E0213 15:31:59.936895 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": dial tcp 172.31.29.23:6443: connect: connection refused" interval="800ms" Feb 13 15:32:00.035805 kubelet[2822]: I0213 15:32:00.035690 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:32:00.036227 kubelet[2822]: E0213 15:32:00.036203 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.23:6443/api/v1/nodes\": dial tcp 172.31.29.23:6443: connect: connection refused" node="ip-172-31-29-23" Feb 13 15:32:00.116917 kubelet[2822]: W0213 15:32:00.116842 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-23&limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.116917 kubelet[2822]: E0213 15:32:00.116919 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-23&limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.166009 kubelet[2822]: W0213 15:32:00.165946 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.166009 kubelet[2822]: E0213 15:32:00.166013 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.309089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068589499.mount: Deactivated successfully. Feb 13 15:32:00.314677 kubelet[2822]: W0213 15:32:00.314606 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.314677 kubelet[2822]: E0213 15:32:00.314672 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.322188 containerd[1898]: time="2025-02-13T15:32:00.322108870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:00.327030 containerd[1898]: time="2025-02-13T15:32:00.326602929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:32:00.328362 containerd[1898]: time="2025-02-13T15:32:00.328314060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:00.330252 containerd[1898]: time="2025-02-13T15:32:00.330207427Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:00.333342 containerd[1898]: time="2025-02-13T15:32:00.333186154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:00.334843 containerd[1898]: time="2025-02-13T15:32:00.334796661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:32:00.336414 containerd[1898]: time="2025-02-13T15:32:00.336233350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:32:00.340168 containerd[1898]: time="2025-02-13T15:32:00.339229036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:00.343964 containerd[1898]: time="2025-02-13T15:32:00.341978045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.637954ms" Feb 13 15:32:00.345822 containerd[1898]: time="2025-02-13T15:32:00.345771548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 566.359223ms" Feb 13 15:32:00.359219 containerd[1898]: time="2025-02-13T15:32:00.359164027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.941515ms" Feb 13 15:32:00.496987 kubelet[2822]: W0213 15:32:00.495622 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.496987 kubelet[2822]: E0213 15:32:00.495679 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:00.634408 containerd[1898]: time="2025-02-13T15:32:00.634066962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:00.634408 containerd[1898]: time="2025-02-13T15:32:00.634181520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:00.634408 containerd[1898]: time="2025-02-13T15:32:00.634209443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.635623 containerd[1898]: time="2025-02-13T15:32:00.628827131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:00.637475 containerd[1898]: time="2025-02-13T15:32:00.636525021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.637609 containerd[1898]: time="2025-02-13T15:32:00.636184087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:00.637609 containerd[1898]: time="2025-02-13T15:32:00.636214890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.637894 containerd[1898]: time="2025-02-13T15:32:00.637727283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.640784 containerd[1898]: time="2025-02-13T15:32:00.640622681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:00.641147 containerd[1898]: time="2025-02-13T15:32:00.640761743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:00.641147 containerd[1898]: time="2025-02-13T15:32:00.640803815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.642003 containerd[1898]: time="2025-02-13T15:32:00.641952096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:00.679826 systemd[1]: Started cri-containerd-82e755cc6aaef522fc4afd0ddaee50cf22213df8b1f2640e815691cac81f2c15.scope - libcontainer container 82e755cc6aaef522fc4afd0ddaee50cf22213df8b1f2640e815691cac81f2c15. Feb 13 15:32:00.703376 systemd[1]: Started cri-containerd-9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d.scope - libcontainer container 9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d. Feb 13 15:32:00.712208 systemd[1]: Started cri-containerd-a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1.scope - libcontainer container a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1. Feb 13 15:32:00.738827 kubelet[2822]: E0213 15:32:00.738686 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": dial tcp 172.31.29.23:6443: connect: connection refused" interval="1.6s" Feb 13 15:32:00.841309 kubelet[2822]: I0213 15:32:00.841244 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:32:00.841901 kubelet[2822]: E0213 15:32:00.841604 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.23:6443/api/v1/nodes\": dial tcp 172.31.29.23:6443: connect: connection refused" node="ip-172-31-29-23" Feb 13 15:32:00.852765 containerd[1898]: time="2025-02-13T15:32:00.852458491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-23,Uid:a876b9fcb11e0ea65472654010730c60,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e755cc6aaef522fc4afd0ddaee50cf22213df8b1f2640e815691cac81f2c15\"" Feb 13 15:32:00.862938 containerd[1898]: time="2025-02-13T15:32:00.862627146Z" level=info msg="CreateContainer within sandbox \"82e755cc6aaef522fc4afd0ddaee50cf22213df8b1f2640e815691cac81f2c15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:32:00.869536 containerd[1898]: time="2025-02-13T15:32:00.869295641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-23,Uid:396144689781154b10488501b9790b14,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d\"" Feb 13 15:32:00.880592 containerd[1898]: time="2025-02-13T15:32:00.880520506Z" level=info msg="CreateContainer within sandbox \"9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:32:00.883589 containerd[1898]: time="2025-02-13T15:32:00.883552910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-23,Uid:fe9fdc5493f28cae5b2a842eb31f98f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1\"" Feb 13 15:32:00.888920 containerd[1898]: time="2025-02-13T15:32:00.888368402Z" level=info msg="CreateContainer within sandbox \"a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:32:00.909739 kubelet[2822]: E0213 15:32:00.909708 2822 event.go:355] "Unable to write 
event (may retry after sleeping)" err="Post \"https://172.31.29.23:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-23.1823ce4ee29e40d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-23,UID:ip-172-31-29-23,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-23,},FirstTimestamp:2025-02-13 15:31:59.29993647 +0000 UTC m=+0.545675149,LastTimestamp:2025-02-13 15:31:59.29993647 +0000 UTC m=+0.545675149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-23,}" Feb 13 15:32:00.924770 containerd[1898]: time="2025-02-13T15:32:00.924681577Z" level=info msg="CreateContainer within sandbox \"82e755cc6aaef522fc4afd0ddaee50cf22213df8b1f2640e815691cac81f2c15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c44f00951f0cddfad7e88cca5a871498195490bd07be98d9b4e2c0871eb42268\"" Feb 13 15:32:00.925906 containerd[1898]: time="2025-02-13T15:32:00.925872155Z" level=info msg="StartContainer for \"c44f00951f0cddfad7e88cca5a871498195490bd07be98d9b4e2c0871eb42268\"" Feb 13 15:32:00.933440 containerd[1898]: time="2025-02-13T15:32:00.933300860Z" level=info msg="CreateContainer within sandbox \"9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106\"" Feb 13 15:32:00.933854 containerd[1898]: time="2025-02-13T15:32:00.933819073Z" level=info msg="StartContainer for \"e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106\"" Feb 13 15:32:00.975072 containerd[1898]: time="2025-02-13T15:32:00.974946730Z" level=info msg="CreateContainer within sandbox \"a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c\"" Feb 13 15:32:00.976015 containerd[1898]: time="2025-02-13T15:32:00.975970483Z" level=info msg="StartContainer for \"1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c\"" Feb 13 15:32:01.046827 systemd[1]: Started cri-containerd-c44f00951f0cddfad7e88cca5a871498195490bd07be98d9b4e2c0871eb42268.scope - libcontainer container c44f00951f0cddfad7e88cca5a871498195490bd07be98d9b4e2c0871eb42268. Feb 13 15:32:01.111180 systemd[1]: Started cri-containerd-e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106.scope - libcontainer container e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106. Feb 13 15:32:01.147958 systemd[1]: Started cri-containerd-1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c.scope - libcontainer container 1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c. 
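The RunPodSandbox → CreateContainer → StartContainer sequence above is the CRI flow the kubelet drives for each static pod. A skeletal sketch over the CRI gRPC API (k8s.io/cri-api), with the sandbox and container configs elided because the log only shows their metadata:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	var sandboxCfg *runtimeapi.PodSandboxConfig // elided: pod name/uid/namespace as logged
	var ctrCfg *runtimeapi.ContainerConfig      // elided: image, command, mounts

	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, c.ContainerId)
}

Each StartContainer maps to one of the cri-containerd-<id>.scope units systemd starts below.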
Feb 13 15:32:01.232249 containerd[1898]: time="2025-02-13T15:32:01.232203270Z" level=info msg="StartContainer for \"c44f00951f0cddfad7e88cca5a871498195490bd07be98d9b4e2c0871eb42268\" returns successfully" Feb 13 15:32:01.258108 containerd[1898]: time="2025-02-13T15:32:01.257995422Z" level=info msg="StartContainer for \"e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106\" returns successfully" Feb 13 15:32:01.280377 containerd[1898]: time="2025-02-13T15:32:01.280316545Z" level=info msg="StartContainer for \"1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c\" returns successfully" Feb 13 15:32:01.282208 kubelet[2822]: E0213 15:32:01.282058 2822 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:02.063311 kubelet[2822]: W0213 15:32:02.063229 2822 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:02.063311 kubelet[2822]: E0213 15:32:02.063318 2822 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.23:6443: connect: connection refused Feb 13 15:32:02.444412 kubelet[2822]: I0213 15:32:02.443860 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:32:04.428476 kubelet[2822]: E0213 15:32:04.428344 2822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-23\" not found" node="ip-172-31-29-23" Feb 13 15:32:04.461677 kubelet[2822]: I0213 15:32:04.461642 2822 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-23" Feb 13 15:32:05.292152 kubelet[2822]: I0213 15:32:05.291929 2822 apiserver.go:52] "Watching apiserver" Feb 13 15:32:05.332121 kubelet[2822]: I0213 15:32:05.332076 2822 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:32:07.237863 update_engine[1876]: I20250213 15:32:07.237747 1876 update_attempter.cc:509] Updating boot flags... Feb 13 15:32:07.302185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3111) Feb 13 15:32:07.951530 systemd[1]: Reloading requested from client PID 3195 ('systemctl') (unit session-7.scope)... Feb 13 15:32:07.951550 systemd[1]: Reloading... Feb 13 15:32:08.152295 zram_generator::config[3235]: No configuration found. Feb 13 15:32:08.351923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:08.466961 systemd[1]: Reloading finished in 514 ms. Feb 13 15:32:08.518744 kubelet[2822]: I0213 15:32:08.518668 2822 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:32:08.519384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
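Node registration, the Post to https://172.31.29.23:6443/api/v1/nodes that was refused throughout the bootstrap, finally succeeds once the static apiserver container is serving. In client-go terms the call is essentially the following (a hypothetical equivalent, not the kubelet's exact code; the /etc/kubernetes/kubelet.conf kubeconfig path is the standard kubeadm location, assumed here):

package main

import (
	"context"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path: the kubeconfig kubeadm writes for the kubelet.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ip-172-31-29-23"}}
	if _, err := cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
		log.Fatal(err) // "connection refused" until the static apiserver pod runs
	}
}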
Feb 13 15:32:08.536659 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:32:08.536929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:08.546910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:08.862020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:08.875774 (kubelet)[3292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:32:08.996630 kubelet[3292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:08.997020 kubelet[3292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:32:08.997121 kubelet[3292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:08.997368 kubelet[3292]: I0213 15:32:08.997332 3292 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:32:09.004214 kubelet[3292]: I0213 15:32:09.004180 3292 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:32:09.004214 kubelet[3292]: I0213 15:32:09.004209 3292 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:32:09.005328 kubelet[3292]: I0213 15:32:09.005297 3292 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:32:09.007563 kubelet[3292]: I0213 15:32:09.007536 3292 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:32:09.017118 kubelet[3292]: I0213 15:32:09.016299 3292 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:32:09.030436 sudo[3306]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:32:09.030884 sudo[3306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:32:09.035494 kubelet[3292]: I0213 15:32:09.035422 3292 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:32:09.036026 kubelet[3292]: I0213 15:32:09.035696 3292 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:32:09.036026 kubelet[3292]: I0213 15:32:09.035980 3292 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:32:09.036026 kubelet[3292]: I0213 15:32:09.036010 3292 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:32:09.036026 kubelet[3292]: I0213 15:32:09.036024 3292 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:32:09.036352 kubelet[3292]: I0213 15:32:09.036065 3292 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:09.036352 kubelet[3292]: I0213 15:32:09.036190 3292 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:32:09.036352 kubelet[3292]: I0213 15:32:09.036209 3292 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:32:09.036352 kubelet[3292]: I0213 15:32:09.036241 3292 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:32:09.036352 kubelet[3292]: I0213 15:32:09.036261 3292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:32:09.042720 kubelet[3292]: I0213 15:32:09.039377 3292 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:32:09.042720 kubelet[3292]: I0213 15:32:09.039601 3292 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:32:09.042720 kubelet[3292]: I0213 15:32:09.040158 3292 server.go:1256] "Started kubelet" Feb 13 15:32:09.045985 kubelet[3292]: I0213 15:32:09.045957 3292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:32:09.063498 kubelet[3292]: I0213 15:32:09.062407 3292 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:32:09.064151 kubelet[3292]: I0213 15:32:09.063727 3292 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:32:09.065955 kubelet[3292]: I0213 15:32:09.064827 3292 desired_state_of_world_populator.go:151] "Desired state populator starts to 
run" Feb 13 15:32:09.066701 kubelet[3292]: I0213 15:32:09.066676 3292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:32:09.067349 kubelet[3292]: I0213 15:32:09.066913 3292 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:32:09.067458 kubelet[3292]: I0213 15:32:09.066993 3292 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:32:09.068328 kubelet[3292]: I0213 15:32:09.066614 3292 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:32:09.090857 kubelet[3292]: I0213 15:32:09.088826 3292 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:32:09.090857 kubelet[3292]: I0213 15:32:09.089195 3292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:32:09.116207 kubelet[3292]: I0213 15:32:09.115780 3292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:32:09.123878 kubelet[3292]: I0213 15:32:09.123849 3292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:32:09.124458 kubelet[3292]: I0213 15:32:09.124041 3292 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:32:09.124458 kubelet[3292]: I0213 15:32:09.124069 3292 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:32:09.124458 kubelet[3292]: E0213 15:32:09.124127 3292 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:32:09.127369 kubelet[3292]: I0213 15:32:09.125971 3292 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:32:09.182080 kubelet[3292]: I0213 15:32:09.182036 3292 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-23" Feb 13 15:32:09.212923 kubelet[3292]: I0213 15:32:09.212892 3292 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-23" Feb 13 15:32:09.213543 kubelet[3292]: I0213 15:32:09.213450 3292 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-23" Feb 13 15:32:09.227160 kubelet[3292]: E0213 15:32:09.224509 3292 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:32:09.257414 kubelet[3292]: I0213 15:32:09.257381 3292 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:32:09.257636 kubelet[3292]: I0213 15:32:09.257569 3292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:32:09.257636 kubelet[3292]: I0213 15:32:09.257592 3292 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:09.258338 kubelet[3292]: I0213 15:32:09.258211 3292 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:32:09.258338 kubelet[3292]: I0213 15:32:09.258240 3292 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:32:09.258338 kubelet[3292]: I0213 15:32:09.258250 3292 policy_none.go:49] "None policy: Start" Feb 13 15:32:09.260484 kubelet[3292]: I0213 15:32:09.260457 3292 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:32:09.260484 kubelet[3292]: I0213 15:32:09.260486 3292 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:32:09.260733 kubelet[3292]: I0213 15:32:09.260709 3292 state_mem.go:75] 
"Updated machine memory state" Feb 13 15:32:09.271705 kubelet[3292]: I0213 15:32:09.271586 3292 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:32:09.274709 kubelet[3292]: I0213 15:32:09.274680 3292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:32:09.426181 kubelet[3292]: I0213 15:32:09.425493 3292 topology_manager.go:215] "Topology Admit Handler" podUID="a876b9fcb11e0ea65472654010730c60" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-23" Feb 13 15:32:09.426181 kubelet[3292]: I0213 15:32:09.425623 3292 topology_manager.go:215] "Topology Admit Handler" podUID="fe9fdc5493f28cae5b2a842eb31f98f1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.426181 kubelet[3292]: I0213 15:32:09.425717 3292 topology_manager.go:215] "Topology Admit Handler" podUID="396144689781154b10488501b9790b14" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-23" Feb 13 15:32:09.448943 kubelet[3292]: E0213 15:32:09.448910 3292 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-23\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:32:09.473509 kubelet[3292]: I0213 15:32:09.473469 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:32:09.473661 kubelet[3292]: I0213 15:32:09.473527 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:32:09.473661 kubelet[3292]: I0213 15:32:09.473564 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.473661 kubelet[3292]: I0213 15:32:09.473594 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.473661 kubelet[3292]: I0213 15:32:09.473622 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.473838 kubelet[3292]: I0213 15:32:09.473666 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.473838 kubelet[3292]: I0213 15:32:09.473699 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/396144689781154b10488501b9790b14-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-23\" (UID: \"396144689781154b10488501b9790b14\") " pod="kube-system/kube-scheduler-ip-172-31-29-23" Feb 13 15:32:09.473838 kubelet[3292]: I0213 15:32:09.473749 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9fdc5493f28cae5b2a842eb31f98f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-23\" (UID: \"fe9fdc5493f28cae5b2a842eb31f98f1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-23" Feb 13 15:32:09.473838 kubelet[3292]: I0213 15:32:09.473800 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a876b9fcb11e0ea65472654010730c60-ca-certs\") pod \"kube-apiserver-ip-172-31-29-23\" (UID: \"a876b9fcb11e0ea65472654010730c60\") " pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:32:09.954506 sudo[3306]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:10.039092 kubelet[3292]: I0213 15:32:10.039050 3292 apiserver.go:52] "Watching apiserver" Feb 13 15:32:10.069171 kubelet[3292]: I0213 15:32:10.067252 3292 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:32:10.225378 kubelet[3292]: E0213 15:32:10.224747 3292 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-23\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-23" Feb 13 15:32:10.296140 kubelet[3292]: I0213 15:32:10.295832 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-23" podStartSLOduration=4.295753979 podStartE2EDuration="4.295753979s" podCreationTimestamp="2025-02-13 15:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:10.291977801 +0000 UTC m=+1.397947319" watchObservedRunningTime="2025-02-13 15:32:10.295753979 +0000 UTC m=+1.401723491" Feb 13 15:32:10.296140 kubelet[3292]: I0213 15:32:10.296011 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-23" podStartSLOduration=1.295970579 podStartE2EDuration="1.295970579s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:10.278562015 +0000 UTC m=+1.384531547" watchObservedRunningTime="2025-02-13 15:32:10.295970579 +0000 UTC m=+1.401940098" Feb 13 15:32:12.260829 sudo[2217]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:12.284311 sshd[2216]: Connection closed by 139.178.89.65 port 51292 Feb 13 15:32:12.286074 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:12.292001 systemd[1]: sshd@6-172.31.29.23:22-139.178.89.65:51292.service: Deactivated successfully. 
Feb 13 15:32:12.296611 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:32:12.297258 systemd[1]: session-7.scope: Consumed 5.719s CPU time, 183.4M memory peak, 0B memory swap peak. Feb 13 15:32:12.300386 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:32:12.302960 systemd-logind[1875]: Removed session 7. Feb 13 15:32:12.385998 kubelet[3292]: I0213 15:32:12.385934 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-23" podStartSLOduration=3.385884605 podStartE2EDuration="3.385884605s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:10.309078535 +0000 UTC m=+1.415048056" watchObservedRunningTime="2025-02-13 15:32:12.385884605 +0000 UTC m=+3.491854126" Feb 13 15:32:21.243917 kubelet[3292]: I0213 15:32:21.243692 3292 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:32:21.245903 kubelet[3292]: I0213 15:32:21.245246 3292 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:32:21.245972 containerd[1898]: time="2025-02-13T15:32:21.244634687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:32:22.268385 kubelet[3292]: I0213 15:32:22.268318 3292 topology_manager.go:215] "Topology Admit Handler" podUID="3a98ad8e-a894-455a-86a6-e5d849738f43" podNamespace="kube-system" podName="kube-proxy-kh297" Feb 13 15:32:22.279416 kubelet[3292]: I0213 15:32:22.277926 3292 topology_manager.go:215] "Topology Admit Handler" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" podNamespace="kube-system" podName="cilium-jxxls" Feb 13 15:32:22.282695 kubelet[3292]: W0213 15:32:22.282562 3292 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-29-23" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-23' and this object Feb 13 15:32:22.283251 kubelet[3292]: E0213 15:32:22.282934 3292 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-29-23" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-23' and this object Feb 13 15:32:22.290474 systemd[1]: Created slice kubepods-besteffort-pod3a98ad8e_a894_455a_86a6_e5d849738f43.slice - libcontainer container kubepods-besteffort-pod3a98ad8e_a894_455a_86a6_e5d849738f43.slice. Feb 13 15:32:22.317186 systemd[1]: Created slice kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice - libcontainer container kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice. Feb 13 15:32:22.358142 kubelet[3292]: I0213 15:32:22.358084 3292 topology_manager.go:215] "Topology Admit Handler" podUID="169fe880-db03-415c-8e71-809acabe91f6" podNamespace="kube-system" podName="cilium-operator-5cc964979-r5s5z" Feb 13 15:32:22.378088 systemd[1]: Created slice kubepods-besteffort-pod169fe880_db03_415c_8e71_809acabe91f6.slice - libcontainer container kubepods-besteffort-pod169fe880_db03_415c_8e71_809acabe91f6.slice. 
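[Note] The kubepods-*.slice names in the "Created slice" entries above follow the kubelet's systemd cgroup driver convention: QoS class plus the pod UID with dashes mapped to underscores (a dash is a hierarchy separator in systemd slice names). A small sketch, assuming only the convention that the two slices above exhibit:

```python
def pod_slice(uid: str, qos: str) -> str:
    """Build the systemd slice name the kubelet logs for a pod.

    '-' separates hierarchy levels in slice names, so the kubelet
    replaces dashes inside the pod UID with underscores.
    """
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

# Matches the slices created in the entries above.
assert pod_slice("3a98ad8e-a894-455a-86a6-e5d849738f43", "besteffort") == \
    "kubepods-besteffort-pod3a98ad8e_a894_455a_86a6_e5d849738f43.slice"
assert pod_slice("ee2a8f05-f891-43fc-8410-d4394031cfae", "burstable") == \
    "kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice"
```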
Feb 13 15:32:22.382086 kubelet[3292]: I0213 15:32:22.379306 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a98ad8e-a894-455a-86a6-e5d849738f43-kube-proxy\") pod \"kube-proxy-kh297\" (UID: \"3a98ad8e-a894-455a-86a6-e5d849738f43\") " pod="kube-system/kube-proxy-kh297" Feb 13 15:32:22.382086 kubelet[3292]: I0213 15:32:22.379355 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-etc-cni-netd\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382086 kubelet[3292]: I0213 15:32:22.379384 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-xtables-lock\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382086 kubelet[3292]: I0213 15:32:22.379412 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-kernel\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382086 kubelet[3292]: I0213 15:32:22.379440 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt95t\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-kube-api-access-gt95t\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382489 kubelet[3292]: I0213 15:32:22.379469 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a98ad8e-a894-455a-86a6-e5d849738f43-xtables-lock\") pod \"kube-proxy-kh297\" (UID: \"3a98ad8e-a894-455a-86a6-e5d849738f43\") " pod="kube-system/kube-proxy-kh297" Feb 13 15:32:22.382489 kubelet[3292]: I0213 15:32:22.379499 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvqht\" (UniqueName: \"kubernetes.io/projected/3a98ad8e-a894-455a-86a6-e5d849738f43-kube-api-access-kvqht\") pod \"kube-proxy-kh297\" (UID: \"3a98ad8e-a894-455a-86a6-e5d849738f43\") " pod="kube-system/kube-proxy-kh297" Feb 13 15:32:22.382489 kubelet[3292]: I0213 15:32:22.379618 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee2a8f05-f891-43fc-8410-d4394031cfae-clustermesh-secrets\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382489 kubelet[3292]: I0213 15:32:22.379664 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-net\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382489 kubelet[3292]: I0213 15:32:22.380016 3292 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-hubble-tls\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382711 kubelet[3292]: I0213 15:32:22.380063 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7ld\" (UniqueName: \"kubernetes.io/projected/169fe880-db03-415c-8e71-809acabe91f6-kube-api-access-6p7ld\") pod \"cilium-operator-5cc964979-r5s5z\" (UID: \"169fe880-db03-415c-8e71-809acabe91f6\") " pod="kube-system/cilium-operator-5cc964979-r5s5z" Feb 13 15:32:22.382711 kubelet[3292]: I0213 15:32:22.380093 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-cgroup\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382711 kubelet[3292]: I0213 15:32:22.380120 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-lib-modules\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.382711 kubelet[3292]: I0213 15:32:22.380321 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a98ad8e-a894-455a-86a6-e5d849738f43-lib-modules\") pod \"kube-proxy-kh297\" (UID: \"3a98ad8e-a894-455a-86a6-e5d849738f43\") " pod="kube-system/kube-proxy-kh297" Feb 13 15:32:22.382711 kubelet[3292]: I0213 15:32:22.380348 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-config-path\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.383405 kubelet[3292]: I0213 15:32:22.380376 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169fe880-db03-415c-8e71-809acabe91f6-cilium-config-path\") pod \"cilium-operator-5cc964979-r5s5z\" (UID: \"169fe880-db03-415c-8e71-809acabe91f6\") " pod="kube-system/cilium-operator-5cc964979-r5s5z" Feb 13 15:32:22.383405 kubelet[3292]: I0213 15:32:22.380413 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-bpf-maps\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.383405 kubelet[3292]: I0213 15:32:22.380442 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cni-path\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.383405 kubelet[3292]: I0213 15:32:22.380469 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-run\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.383405 kubelet[3292]: I0213 15:32:22.380499 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-hostproc\") pod \"cilium-jxxls\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " pod="kube-system/cilium-jxxls" Feb 13 15:32:22.624074 containerd[1898]: time="2025-02-13T15:32:22.624041131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxxls,Uid:ee2a8f05-f891-43fc-8410-d4394031cfae,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:22.669677 containerd[1898]: time="2025-02-13T15:32:22.669549284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:22.669677 containerd[1898]: time="2025-02-13T15:32:22.669605426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:22.669677 containerd[1898]: time="2025-02-13T15:32:22.669621444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:22.670162 containerd[1898]: time="2025-02-13T15:32:22.669710931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:22.690053 containerd[1898]: time="2025-02-13T15:32:22.690021216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r5s5z,Uid:169fe880-db03-415c-8e71-809acabe91f6,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:22.691354 systemd[1]: Started cri-containerd-1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df.scope - libcontainer container 1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df. Feb 13 15:32:22.728802 containerd[1898]: time="2025-02-13T15:32:22.728688912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxxls,Uid:ee2a8f05-f891-43fc-8410-d4394031cfae,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\"" Feb 13 15:32:22.733052 containerd[1898]: time="2025-02-13T15:32:22.732834062Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:32:22.736505 containerd[1898]: time="2025-02-13T15:32:22.736383456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:22.737126 containerd[1898]: time="2025-02-13T15:32:22.737041135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:22.737533 containerd[1898]: time="2025-02-13T15:32:22.737235520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:22.740440 containerd[1898]: time="2025-02-13T15:32:22.738112914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:22.763479 systemd[1]: Started cri-containerd-abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1.scope - libcontainer container abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1. Feb 13 15:32:22.811464 containerd[1898]: time="2025-02-13T15:32:22.811339351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r5s5z,Uid:169fe880-db03-415c-8e71-809acabe91f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\"" Feb 13 15:32:23.513833 containerd[1898]: time="2025-02-13T15:32:23.513413235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kh297,Uid:3a98ad8e-a894-455a-86a6-e5d849738f43,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:23.574891 containerd[1898]: time="2025-02-13T15:32:23.574790287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:23.574891 containerd[1898]: time="2025-02-13T15:32:23.574849157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:23.574891 containerd[1898]: time="2025-02-13T15:32:23.574864404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:23.575610 containerd[1898]: time="2025-02-13T15:32:23.575399560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:23.604346 systemd[1]: Started cri-containerd-933377fcd8a381c089cf21324bb8d67dd3d0532b3d28cb88da9a0361c3470a98.scope - libcontainer container 933377fcd8a381c089cf21324bb8d67dd3d0532b3d28cb88da9a0361c3470a98. Feb 13 15:32:23.635779 containerd[1898]: time="2025-02-13T15:32:23.635633024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kh297,Uid:3a98ad8e-a894-455a-86a6-e5d849738f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"933377fcd8a381c089cf21324bb8d67dd3d0532b3d28cb88da9a0361c3470a98\"" Feb 13 15:32:23.639585 containerd[1898]: time="2025-02-13T15:32:23.639541208Z" level=info msg="CreateContainer within sandbox \"933377fcd8a381c089cf21324bb8d67dd3d0532b3d28cb88da9a0361c3470a98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:32:23.662223 containerd[1898]: time="2025-02-13T15:32:23.662102049Z" level=info msg="CreateContainer within sandbox \"933377fcd8a381c089cf21324bb8d67dd3d0532b3d28cb88da9a0361c3470a98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58f9ac6f7bbb5ca10f4c28b2b23ec8832bdccb67866bc870d89164d5bb50e4e1\"" Feb 13 15:32:23.665152 containerd[1898]: time="2025-02-13T15:32:23.663678537Z" level=info msg="StartContainer for \"58f9ac6f7bbb5ca10f4c28b2b23ec8832bdccb67866bc870d89164d5bb50e4e1\"" Feb 13 15:32:23.700236 systemd[1]: Started cri-containerd-58f9ac6f7bbb5ca10f4c28b2b23ec8832bdccb67866bc870d89164d5bb50e4e1.scope - libcontainer container 58f9ac6f7bbb5ca10f4c28b2b23ec8832bdccb67866bc870d89164d5bb50e4e1. 
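[Note] The containerd entries above are logfmt-style records (time=, level=, msg=). A hedged helper for pulling those fields out of a capture like this one; the regex is written against the format visible above, not against any containerd API:

```python
import re

# Field layout taken from the containerd entries above:
#   time="..." level=info msg="..."
LOGFMT = re.compile(r'time="(?P<time>[^"]+)" level=(?P<level>\w+) msg=(?P<msg>.+)$')

sample = ('time="2025-02-13T15:32:23.639541208Z" level=info '
          'msg="CreateContainer within sandbox ... for container '
          '&ContainerMetadata{Name:kube-proxy,Attempt:0,}"')

m = LOGFMT.search(sample)
if m:
    print(m.group("time"))   # 2025-02-13T15:32:23.639541208Z
    print(m.group("level"))  # info
    print(m.group("msg"))    # quoted message body, quotes included
```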
Feb 13 15:32:23.736518 containerd[1898]: time="2025-02-13T15:32:23.735763778Z" level=info msg="StartContainer for \"58f9ac6f7bbb5ca10f4c28b2b23ec8832bdccb67866bc870d89164d5bb50e4e1\" returns successfully" Feb 13 15:32:24.279866 kubelet[3292]: I0213 15:32:24.279578 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kh297" podStartSLOduration=2.279529792 podStartE2EDuration="2.279529792s" podCreationTimestamp="2025-02-13 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:24.279166485 +0000 UTC m=+15.385136005" watchObservedRunningTime="2025-02-13 15:32:24.279529792 +0000 UTC m=+15.385499314" Feb 13 15:32:33.919834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013506460.mount: Deactivated successfully. Feb 13 15:32:38.087390 containerd[1898]: time="2025-02-13T15:32:38.087331576Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:38.088951 containerd[1898]: time="2025-02-13T15:32:38.088913230Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:32:38.090726 containerd[1898]: time="2025-02-13T15:32:38.090413593Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:38.092824 containerd[1898]: time="2025-02-13T15:32:38.092779786Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.359894522s" Feb 13 15:32:38.092996 containerd[1898]: time="2025-02-13T15:32:38.092974881Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:32:38.095420 containerd[1898]: time="2025-02-13T15:32:38.095376221Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:32:38.101169 containerd[1898]: time="2025-02-13T15:32:38.100401423Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:32:38.259452 containerd[1898]: time="2025-02-13T15:32:38.259408520Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\"" Feb 13 15:32:38.261444 containerd[1898]: time="2025-02-13T15:32:38.261404808Z" level=info msg="StartContainer for \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\"" Feb 13 15:32:38.616333 systemd[1]: Started 
cri-containerd-7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043.scope - libcontainer container 7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043. Feb 13 15:32:38.671220 containerd[1898]: time="2025-02-13T15:32:38.670906057Z" level=info msg="StartContainer for \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\" returns successfully" Feb 13 15:32:38.696475 systemd[1]: cri-containerd-7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043.scope: Deactivated successfully. Feb 13 15:32:38.786786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043-rootfs.mount: Deactivated successfully. Feb 13 15:32:39.283209 containerd[1898]: time="2025-02-13T15:32:39.236292247Z" level=info msg="shim disconnected" id=7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043 namespace=k8s.io Feb 13 15:32:39.283755 containerd[1898]: time="2025-02-13T15:32:39.283220324Z" level=warning msg="cleaning up after shim disconnected" id=7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043 namespace=k8s.io Feb 13 15:32:39.283755 containerd[1898]: time="2025-02-13T15:32:39.283241610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:39.304883 containerd[1898]: time="2025-02-13T15:32:39.304770280Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:32:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:32:39.423720 containerd[1898]: time="2025-02-13T15:32:39.423426561Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:32:39.467068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124350876.mount: Deactivated successfully. Feb 13 15:32:39.477406 containerd[1898]: time="2025-02-13T15:32:39.477336735Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\"" Feb 13 15:32:39.483280 containerd[1898]: time="2025-02-13T15:32:39.479666907Z" level=info msg="StartContainer for \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\"" Feb 13 15:32:39.564432 systemd[1]: Started cri-containerd-a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c.scope - libcontainer container a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c. Feb 13 15:32:39.605725 containerd[1898]: time="2025-02-13T15:32:39.605673283Z" level=info msg="StartContainer for \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\" returns successfully" Feb 13 15:32:39.627222 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:32:39.628252 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:32:39.628526 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:32:39.650679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:32:39.660507 systemd[1]: cri-containerd-a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c.scope: Deactivated successfully. Feb 13 15:32:39.727927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
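[Note] The "cleanup warnings" entry above nests a second logfmt record inside the outer msg; the inner error appears to come from the shim trying to remove a container whose process had already exited (an inference, not something the log states). A sketch extracting the nested error field:

```python
import re

# Inner record copied from the cleanup-warnings entry above,
# with the journal's backslash escaping removed.
outer = ('cleanup warnings time="2025-02-13T15:32:39Z" level=warning '
         'msg="failed to remove runc container" '
         'error="runc did not terminate successfully: exit status 255: " '
         'runtime=io.containerd.runc.v2')

err = re.search(r'error="([^"]*)"', outer)
print(err.group(1) if err else "no error field")
```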
Feb 13 15:32:39.761255 containerd[1898]: time="2025-02-13T15:32:39.761188721Z" level=info msg="shim disconnected" id=a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c namespace=k8s.io Feb 13 15:32:39.762198 containerd[1898]: time="2025-02-13T15:32:39.762161801Z" level=warning msg="cleaning up after shim disconnected" id=a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c namespace=k8s.io Feb 13 15:32:39.762198 containerd[1898]: time="2025-02-13T15:32:39.762195264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:39.809480 containerd[1898]: time="2025-02-13T15:32:39.809427065Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:32:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:32:40.452260 containerd[1898]: time="2025-02-13T15:32:40.449680264Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:32:40.465098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount115225159.mount: Deactivated successfully. Feb 13 15:32:40.465407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c-rootfs.mount: Deactivated successfully. Feb 13 15:32:40.575195 containerd[1898]: time="2025-02-13T15:32:40.575110156Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\"" Feb 13 15:32:40.577260 containerd[1898]: time="2025-02-13T15:32:40.575615550Z" level=info msg="StartContainer for \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\"" Feb 13 15:32:40.646393 systemd[1]: Started cri-containerd-0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1.scope - libcontainer container 0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1. Feb 13 15:32:40.694091 systemd[1]: cri-containerd-0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1.scope: Deactivated successfully. Feb 13 15:32:40.694658 containerd[1898]: time="2025-02-13T15:32:40.694477110Z" level=info msg="StartContainer for \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\" returns successfully" Feb 13 15:32:40.744564 containerd[1898]: time="2025-02-13T15:32:40.744415887Z" level=info msg="shim disconnected" id=0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1 namespace=k8s.io Feb 13 15:32:40.744564 containerd[1898]: time="2025-02-13T15:32:40.744471952Z" level=warning msg="cleaning up after shim disconnected" id=0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1 namespace=k8s.io Feb 13 15:32:40.744564 containerd[1898]: time="2025-02-13T15:32:40.744484614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:41.468536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1-rootfs.mount: Deactivated successfully. 
Feb 13 15:32:41.512886 containerd[1898]: time="2025-02-13T15:32:41.512824536Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:32:41.621700 containerd[1898]: time="2025-02-13T15:32:41.621310534Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\"" Feb 13 15:32:41.622561 containerd[1898]: time="2025-02-13T15:32:41.622399692Z" level=info msg="StartContainer for \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\"" Feb 13 15:32:41.659361 systemd[1]: Started cri-containerd-d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c.scope - libcontainer container d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c. Feb 13 15:32:41.686927 systemd[1]: cri-containerd-d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c.scope: Deactivated successfully. Feb 13 15:32:41.695044 containerd[1898]: time="2025-02-13T15:32:41.694998693Z" level=info msg="StartContainer for \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\" returns successfully" Feb 13 15:32:41.718794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c-rootfs.mount: Deactivated successfully. Feb 13 15:32:41.730503 containerd[1898]: time="2025-02-13T15:32:41.730410854Z" level=info msg="shim disconnected" id=d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c namespace=k8s.io Feb 13 15:32:41.730764 containerd[1898]: time="2025-02-13T15:32:41.730513290Z" level=warning msg="cleaning up after shim disconnected" id=d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c namespace=k8s.io Feb 13 15:32:41.730764 containerd[1898]: time="2025-02-13T15:32:41.730528082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:42.447947 containerd[1898]: time="2025-02-13T15:32:42.447904562Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:32:42.496643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720957653.mount: Deactivated successfully. Feb 13 15:32:42.502204 containerd[1898]: time="2025-02-13T15:32:42.502155003Z" level=info msg="CreateContainer within sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\"" Feb 13 15:32:42.506952 containerd[1898]: time="2025-02-13T15:32:42.505680262Z" level=info msg="StartContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\"" Feb 13 15:32:42.613380 systemd[1]: Started cri-containerd-4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506.scope - libcontainer container 4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506. 
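[Note] The preceding entries show the cilium-jxxls pod walking through its init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent container is created below. A sketch recovering that order from a capture like this one by following the CreateContainer request entries; the &ContainerMetadata{Name:...} format is taken from the lines above:

```python
import re

NAME = re.compile(r'&ContainerMetadata\{Name:([^,]+),Attempt:\d+,\}')

def container_order(lines, sandbox_id):
    """Container names, in CreateContainer request order, for one sandbox."""
    for line in lines:
        if sandbox_id in line and "CreateContainer within sandbox" in line \
                and "returns container id" not in line:
            m = NAME.search(line)
            if m:
                yield m.group(1)

# For sandbox 1fd223449b85... the capture above yields the Cilium init
# sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs,
# clean-cilium-state, and finally cilium-agent.
```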
Feb 13 15:32:42.695859 containerd[1898]: time="2025-02-13T15:32:42.695806141Z" level=info msg="StartContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" returns successfully" Feb 13 15:32:43.058943 kubelet[3292]: I0213 15:32:43.058912 3292 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:32:43.106906 kubelet[3292]: I0213 15:32:43.106838 3292 topology_manager.go:215] "Topology Admit Handler" podUID="c8ca1350-c9e4-4723-97b4-e773425179ef" podNamespace="kube-system" podName="coredns-76f75df574-66qw5" Feb 13 15:32:43.123759 systemd[1]: Created slice kubepods-burstable-podc8ca1350_c9e4_4723_97b4_e773425179ef.slice - libcontainer container kubepods-burstable-podc8ca1350_c9e4_4723_97b4_e773425179ef.slice. Feb 13 15:32:43.134618 kubelet[3292]: I0213 15:32:43.134581 3292 topology_manager.go:215] "Topology Admit Handler" podUID="609d5dd6-dd87-4448-89cf-897385435808" podNamespace="kube-system" podName="coredns-76f75df574-d74xq" Feb 13 15:32:43.145979 systemd[1]: Created slice kubepods-burstable-pod609d5dd6_dd87_4448_89cf_897385435808.slice - libcontainer container kubepods-burstable-pod609d5dd6_dd87_4448_89cf_897385435808.slice. Feb 13 15:32:43.208867 kubelet[3292]: I0213 15:32:43.208770 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8ca1350-c9e4-4723-97b4-e773425179ef-config-volume\") pod \"coredns-76f75df574-66qw5\" (UID: \"c8ca1350-c9e4-4723-97b4-e773425179ef\") " pod="kube-system/coredns-76f75df574-66qw5" Feb 13 15:32:43.209508 kubelet[3292]: I0213 15:32:43.209445 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfs4\" (UniqueName: \"kubernetes.io/projected/c8ca1350-c9e4-4723-97b4-e773425179ef-kube-api-access-jbfs4\") pod \"coredns-76f75df574-66qw5\" (UID: \"c8ca1350-c9e4-4723-97b4-e773425179ef\") " pod="kube-system/coredns-76f75df574-66qw5" Feb 13 15:32:43.310744 kubelet[3292]: I0213 15:32:43.310621 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhph\" (UniqueName: \"kubernetes.io/projected/609d5dd6-dd87-4448-89cf-897385435808-kube-api-access-6jhph\") pod \"coredns-76f75df574-d74xq\" (UID: \"609d5dd6-dd87-4448-89cf-897385435808\") " pod="kube-system/coredns-76f75df574-d74xq" Feb 13 15:32:43.310744 kubelet[3292]: I0213 15:32:43.310679 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d5dd6-dd87-4448-89cf-897385435808-config-volume\") pod \"coredns-76f75df574-d74xq\" (UID: \"609d5dd6-dd87-4448-89cf-897385435808\") " pod="kube-system/coredns-76f75df574-d74xq" Feb 13 15:32:43.478574 containerd[1898]: time="2025-02-13T15:32:43.476026443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d74xq,Uid:609d5dd6-dd87-4448-89cf-897385435808,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:43.478574 containerd[1898]: time="2025-02-13T15:32:43.476123820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-66qw5,Uid:c8ca1350-c9e4-4723-97b4-e773425179ef,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:43.525239 kubelet[3292]: I0213 15:32:43.525197 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jxxls" podStartSLOduration=6.162432429 podStartE2EDuration="21.525125623s" 
podCreationTimestamp="2025-02-13 15:32:22 +0000 UTC" firstStartedPulling="2025-02-13 15:32:22.731398953 +0000 UTC m=+13.837368453" lastFinishedPulling="2025-02-13 15:32:38.09409213 +0000 UTC m=+29.200061647" observedRunningTime="2025-02-13 15:32:43.524943064 +0000 UTC m=+34.630912584" watchObservedRunningTime="2025-02-13 15:32:43.525125623 +0000 UTC m=+34.631095135" Feb 13 15:32:44.665400 containerd[1898]: time="2025-02-13T15:32:44.665338842Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:44.667094 containerd[1898]: time="2025-02-13T15:32:44.667031621Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:32:44.668868 containerd[1898]: time="2025-02-13T15:32:44.668811326Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:44.670650 containerd[1898]: time="2025-02-13T15:32:44.670495604Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.575079321s" Feb 13 15:32:44.670650 containerd[1898]: time="2025-02-13T15:32:44.670540377Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:32:44.673076 containerd[1898]: time="2025-02-13T15:32:44.672987405Z" level=info msg="CreateContainer within sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:32:44.707249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922641808.mount: Deactivated successfully. Feb 13 15:32:44.709627 containerd[1898]: time="2025-02-13T15:32:44.709583688Z" level=info msg="CreateContainer within sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\"" Feb 13 15:32:44.710711 containerd[1898]: time="2025-02-13T15:32:44.710358633Z" level=info msg="StartContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\"" Feb 13 15:32:44.748635 systemd[1]: run-containerd-runc-k8s.io-9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45-runc.MWA3Qw.mount: Deactivated successfully. Feb 13 15:32:44.758352 systemd[1]: Started cri-containerd-9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45.scope - libcontainer container 9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45. 
Feb 13 15:32:44.824160 containerd[1898]: time="2025-02-13T15:32:44.824080561Z" level=info msg="StartContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" returns successfully" Feb 13 15:32:48.703647 systemd-networkd[1740]: cilium_host: Link UP Feb 13 15:32:48.703917 systemd-networkd[1740]: cilium_net: Link UP Feb 13 15:32:48.705252 systemd-networkd[1740]: cilium_net: Gained carrier Feb 13 15:32:48.705711 systemd-networkd[1740]: cilium_host: Gained carrier Feb 13 15:32:48.713819 (udev-worker)[4113]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:32:48.715408 (udev-worker)[4112]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:32:48.974558 (udev-worker)[4122]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:32:48.991906 systemd-networkd[1740]: cilium_vxlan: Link UP Feb 13 15:32:48.991915 systemd-networkd[1740]: cilium_vxlan: Gained carrier Feb 13 15:32:49.597772 systemd-networkd[1740]: cilium_net: Gained IPv6LL Feb 13 15:32:49.661363 systemd-networkd[1740]: cilium_host: Gained IPv6LL Feb 13 15:32:49.766169 kernel: NET: Registered PF_ALG protocol family Feb 13 15:32:50.645977 systemd-networkd[1740]: lxc_health: Link UP Feb 13 15:32:50.666461 systemd-networkd[1740]: lxc_health: Gained carrier Feb 13 15:32:50.679766 kubelet[3292]: I0213 15:32:50.679726 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-r5s5z" podStartSLOduration=6.822478971 podStartE2EDuration="28.679662811s" podCreationTimestamp="2025-02-13 15:32:22 +0000 UTC" firstStartedPulling="2025-02-13 15:32:22.813777003 +0000 UTC m=+13.919746508" lastFinishedPulling="2025-02-13 15:32:44.670960836 +0000 UTC m=+35.776930348" observedRunningTime="2025-02-13 15:32:45.645722843 +0000 UTC m=+36.751692363" watchObservedRunningTime="2025-02-13 15:32:50.679662811 +0000 UTC m=+41.785632334" Feb 13 15:32:50.749380 systemd-networkd[1740]: cilium_vxlan: Gained IPv6LL Feb 13 15:32:51.185920 systemd-networkd[1740]: lxcd2300129a219: Link UP Feb 13 15:32:51.190213 kernel: eth0: renamed from tmpdce3d Feb 13 15:32:51.197328 systemd-networkd[1740]: lxcd2300129a219: Gained carrier Feb 13 15:32:51.239644 systemd-networkd[1740]: lxc1fc88442fba0: Link UP Feb 13 15:32:51.246248 kernel: eth0: renamed from tmp86c84 Feb 13 15:32:51.251200 systemd-networkd[1740]: lxc1fc88442fba0: Gained carrier Feb 13 15:32:52.479496 systemd-networkd[1740]: lxc_health: Gained IPv6LL Feb 13 15:32:52.541357 systemd-networkd[1740]: lxc1fc88442fba0: Gained IPv6LL Feb 13 15:32:52.989382 systemd-networkd[1740]: lxcd2300129a219: Gained IPv6LL Feb 13 15:32:55.327272 ntpd[1867]: Listen normally on 7 cilium_host 192.168.0.65:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 7 cilium_host 192.168.0.65:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 8 cilium_net [fe80::648b:80ff:fee6:4562%4]:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 9 cilium_host [fe80::28a8:7ff:fe98:8239%5]:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 10 cilium_vxlan [fe80::b87b:95ff:fe7c:b0a%6]:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 11 lxc_health [fe80::44b6:14ff:fec7:6709%8]:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 15:32:55 ntpd[1867]: Listen normally on 12 lxcd2300129a219 [fe80::18ff:a1ff:fe1c:d350%10]:123 Feb 13 15:32:55.328593 ntpd[1867]: 13 Feb 
15:32:55 ntpd[1867]: Listen normally on 13 lxc1fc88442fba0 [fe80::30f8:f5ff:fec9:e646%12]:123 Feb 13 15:32:55.327376 ntpd[1867]: Listen normally on 8 cilium_net [fe80::648b:80ff:fee6:4562%4]:123 Feb 13 15:32:55.327431 ntpd[1867]: Listen normally on 9 cilium_host [fe80::28a8:7ff:fe98:8239%5]:123 Feb 13 15:32:55.327473 ntpd[1867]: Listen normally on 10 cilium_vxlan [fe80::b87b:95ff:fe7c:b0a%6]:123 Feb 13 15:32:55.327511 ntpd[1867]: Listen normally on 11 lxc_health [fe80::44b6:14ff:fec7:6709%8]:123 Feb 13 15:32:55.327547 ntpd[1867]: Listen normally on 12 lxcd2300129a219 [fe80::18ff:a1ff:fe1c:d350%10]:123 Feb 13 15:32:55.327585 ntpd[1867]: Listen normally on 13 lxc1fc88442fba0 [fe80::30f8:f5ff:fec9:e646%12]:123 Feb 13 15:32:56.487589 systemd[1]: Started sshd@7-172.31.29.23:22-139.178.89.65:53590.service - OpenSSH per-connection server daemon (139.178.89.65:53590). Feb 13 15:32:56.723193 sshd[4480]: Accepted publickey for core from 139.178.89.65 port 53590 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:32:56.732393 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:56.795262 systemd-logind[1875]: New session 8 of user core. Feb 13 15:32:56.803992 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:32:58.134557 sshd[4484]: Connection closed by 139.178.89.65 port 53590 Feb 13 15:32:58.136387 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.142472 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:32:58.144012 systemd[1]: sshd@7-172.31.29.23:22-139.178.89.65:53590.service: Deactivated successfully. Feb 13 15:32:58.150985 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:32:58.153440 systemd-logind[1875]: Removed session 8. Feb 13 15:32:59.560545 containerd[1898]: time="2025-02-13T15:32:59.560009948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.560545 containerd[1898]: time="2025-02-13T15:32:59.560096047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.560545 containerd[1898]: time="2025-02-13T15:32:59.560374474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.587523 containerd[1898]: time="2025-02-13T15:32:59.583956394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.636223 containerd[1898]: time="2025-02-13T15:32:59.631603670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.636223 containerd[1898]: time="2025-02-13T15:32:59.631733830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.636223 containerd[1898]: time="2025-02-13T15:32:59.631752346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.636223 containerd[1898]: time="2025-02-13T15:32:59.631943798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.674803 systemd[1]: run-containerd-runc-k8s.io-86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa-runc.wGUXU8.mount: Deactivated successfully. Feb 13 15:32:59.686502 systemd[1]: Started cri-containerd-86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa.scope - libcontainer container 86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa. Feb 13 15:32:59.761364 systemd[1]: Started cri-containerd-dce3d2676ca424203c564aca8c0a9dd021478faa7d092178675083f5dbf2e211.scope - libcontainer container dce3d2676ca424203c564aca8c0a9dd021478faa7d092178675083f5dbf2e211. Feb 13 15:32:59.928569 containerd[1898]: time="2025-02-13T15:32:59.927289852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-66qw5,Uid:c8ca1350-c9e4-4723-97b4-e773425179ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa\"" Feb 13 15:32:59.936892 containerd[1898]: time="2025-02-13T15:32:59.936750274Z" level=info msg="CreateContainer within sandbox \"86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:59.981597 containerd[1898]: time="2025-02-13T15:32:59.981304998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d74xq,Uid:609d5dd6-dd87-4448-89cf-897385435808,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce3d2676ca424203c564aca8c0a9dd021478faa7d092178675083f5dbf2e211\"" Feb 13 15:32:59.996182 containerd[1898]: time="2025-02-13T15:32:59.995290145Z" level=info msg="CreateContainer within sandbox \"dce3d2676ca424203c564aca8c0a9dd021478faa7d092178675083f5dbf2e211\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:59.997823 containerd[1898]: time="2025-02-13T15:32:59.996767985Z" level=info msg="CreateContainer within sandbox \"86c8424a4591df791046fb2bca87ab7cdd2e32520636a913f746fad88b5109aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d356e4feda0afb76ec88e11755943e4b9f57585fd294122c5f7c6acace0a3596\"" Feb 13 15:33:00.046548 containerd[1898]: time="2025-02-13T15:33:00.025757750Z" level=info msg="StartContainer for \"d356e4feda0afb76ec88e11755943e4b9f57585fd294122c5f7c6acace0a3596\"" Feb 13 15:33:00.148346 containerd[1898]: time="2025-02-13T15:33:00.148292519Z" level=info msg="CreateContainer within sandbox \"dce3d2676ca424203c564aca8c0a9dd021478faa7d092178675083f5dbf2e211\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5805d3aef6f801061445947520e68d599eb2c6ff41b533ebedb9aa9f80094895\"" Feb 13 15:33:00.153683 containerd[1898]: time="2025-02-13T15:33:00.153638992Z" level=info msg="StartContainer for \"5805d3aef6f801061445947520e68d599eb2c6ff41b533ebedb9aa9f80094895\"" Feb 13 15:33:00.217382 systemd[1]: Started cri-containerd-d356e4feda0afb76ec88e11755943e4b9f57585fd294122c5f7c6acace0a3596.scope - libcontainer container d356e4feda0afb76ec88e11755943e4b9f57585fd294122c5f7c6acace0a3596. Feb 13 15:33:00.280375 systemd[1]: Started cri-containerd-5805d3aef6f801061445947520e68d599eb2c6ff41b533ebedb9aa9f80094895.scope - libcontainer container 5805d3aef6f801061445947520e68d599eb2c6ff41b533ebedb9aa9f80094895. 
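[Note] Both coredns pods above share the 76f75df574 segment because Deployment pods are named deployment-name, then the ReplicaSet's pod-template-hash, then a random suffix (standard Kubernetes naming, not something the log itself states). Splitting the names from the entries above:

```python
for pod in ("coredns-76f75df574-66qw5", "coredns-76f75df574-d74xq"):
    deployment, template_hash, suffix = pod.rsplit("-", 2)
    print(deployment, template_hash, suffix)
# coredns 76f75df574 66qw5
# coredns 76f75df574 d74xq
```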
Feb 13 15:33:00.338878 containerd[1898]: time="2025-02-13T15:33:00.338791564Z" level=info msg="StartContainer for \"d356e4feda0afb76ec88e11755943e4b9f57585fd294122c5f7c6acace0a3596\" returns successfully" Feb 13 15:33:00.353288 containerd[1898]: time="2025-02-13T15:33:00.353102027Z" level=info msg="StartContainer for \"5805d3aef6f801061445947520e68d599eb2c6ff41b533ebedb9aa9f80094895\" returns successfully" Feb 13 15:33:00.618596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306945396.mount: Deactivated successfully. Feb 13 15:33:00.655273 kubelet[3292]: I0213 15:33:00.652598 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-66qw5" podStartSLOduration=38.652523832 podStartE2EDuration="38.652523832s" podCreationTimestamp="2025-02-13 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:00.642506134 +0000 UTC m=+51.748475655" watchObservedRunningTime="2025-02-13 15:33:00.652523832 +0000 UTC m=+51.758493347" Feb 13 15:33:00.699795 kubelet[3292]: I0213 15:33:00.699186 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-d74xq" podStartSLOduration=38.698946408 podStartE2EDuration="38.698946408s" podCreationTimestamp="2025-02-13 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:00.696886267 +0000 UTC m=+51.802855788" watchObservedRunningTime="2025-02-13 15:33:00.698946408 +0000 UTC m=+51.804915927" Feb 13 15:33:03.176746 systemd[1]: Started sshd@8-172.31.29.23:22-139.178.89.65:53592.service - OpenSSH per-connection server daemon (139.178.89.65:53592). Feb 13 15:33:03.479123 sshd[4672]: Accepted publickey for core from 139.178.89.65 port 53592 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:03.490942 sshd-session[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:03.515984 systemd-logind[1875]: New session 9 of user core. Feb 13 15:33:03.519473 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:33:03.886586 sshd[4674]: Connection closed by 139.178.89.65 port 53592 Feb 13 15:33:03.887992 sshd-session[4672]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:03.896698 systemd[1]: sshd@8-172.31.29.23:22-139.178.89.65:53592.service: Deactivated successfully. Feb 13 15:33:03.901444 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:33:03.902653 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:33:03.905193 systemd-logind[1875]: Removed session 9. Feb 13 15:33:08.923490 systemd[1]: Started sshd@9-172.31.29.23:22-139.178.89.65:49888.service - OpenSSH per-connection server daemon (139.178.89.65:49888). Feb 13 15:33:09.132780 sshd[4686]: Accepted publickey for core from 139.178.89.65 port 49888 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:09.134836 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:09.145090 systemd-logind[1875]: New session 10 of user core. Feb 13 15:33:09.151614 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 15:33:09.423178 sshd[4689]: Connection closed by 139.178.89.65 port 49888 Feb 13 15:33:09.421779 sshd-session[4686]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:09.441321 systemd[1]: sshd@9-172.31.29.23:22-139.178.89.65:49888.service: Deactivated successfully. Feb 13 15:33:09.453790 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:33:09.459557 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:33:09.461907 systemd-logind[1875]: Removed session 10. Feb 13 15:33:14.460598 systemd[1]: Started sshd@10-172.31.29.23:22-139.178.89.65:49896.service - OpenSSH per-connection server daemon (139.178.89.65:49896). Feb 13 15:33:14.714733 sshd[4704]: Accepted publickey for core from 139.178.89.65 port 49896 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:14.722759 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:14.738207 systemd-logind[1875]: New session 11 of user core. Feb 13 15:33:14.744429 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:33:15.055468 sshd[4706]: Connection closed by 139.178.89.65 port 49896 Feb 13 15:33:15.058081 sshd-session[4704]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:15.063790 systemd[1]: sshd@10-172.31.29.23:22-139.178.89.65:49896.service: Deactivated successfully. Feb 13 15:33:15.070619 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:33:15.073775 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:33:15.075260 systemd-logind[1875]: Removed session 11. Feb 13 15:33:15.093535 systemd[1]: Started sshd@11-172.31.29.23:22-139.178.89.65:48212.service - OpenSSH per-connection server daemon (139.178.89.65:48212). Feb 13 15:33:15.321875 sshd[4718]: Accepted publickey for core from 139.178.89.65 port 48212 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:15.330655 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:15.342215 systemd-logind[1875]: New session 12 of user core. Feb 13 15:33:15.347409 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:33:15.752909 sshd[4720]: Connection closed by 139.178.89.65 port 48212 Feb 13 15:33:15.757348 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:15.773087 systemd[1]: sshd@11-172.31.29.23:22-139.178.89.65:48212.service: Deactivated successfully. Feb 13 15:33:15.778561 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:33:15.783476 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:33:15.809688 systemd[1]: Started sshd@12-172.31.29.23:22-139.178.89.65:48216.service - OpenSSH per-connection server daemon (139.178.89.65:48216). Feb 13 15:33:15.812470 systemd-logind[1875]: Removed session 12. Feb 13 15:33:15.991634 sshd[4729]: Accepted publickey for core from 139.178.89.65 port 48216 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:15.996492 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:16.033977 systemd-logind[1875]: New session 13 of user core. Feb 13 15:33:16.047498 systemd[1]: Started session-13.scope - Session 13 of User core. 
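[Note] The sshd/systemd-logind entries above repeat a fixed cycle: Accepted publickey, pam session opened, "New session N", session-N.scope started, connection closed, scope deactivated, "Removed session N". A sketch pairing the open/close journal lines to compute per-session wall time, assuming one journal entry per line; the regexes are written against the timestamp format above, and the year must be supplied because the journal omits it:

```python
import re
from datetime import datetime

OPEN = re.compile(r"^(?P<ts>\w+ \d+ [\d:.]+) .*New session (?P<id>\d+) of user")
CLOSE = re.compile(r"^(?P<ts>\w+ \d+ [\d:.]+) .*Removed session (?P<id>\d+)\.")

def session_durations(lines, year=2025):
    """Yield (session id, seconds open) for paired open/close entries."""
    fmt = "%b %d %H:%M:%S.%f"
    opened = {}
    for line in lines:
        if m := OPEN.match(line):
            opened[m["id"]] = m["ts"]
        elif (m := CLOSE.match(line)) and m["id"] in opened:
            start = datetime.strptime(opened.pop(m["id"]), fmt).replace(year=year)
            end = datetime.strptime(m["ts"], fmt).replace(year=year)
            yield m["id"], (end - start).total_seconds()

# e.g. session 10 above: opened 15:33:09.145090, removed 15:33:09.461907,
# giving roughly 0.317 seconds.
```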
Feb 13 15:33:16.402056 sshd[4731]: Connection closed by 139.178.89.65 port 48216 Feb 13 15:33:16.404123 sshd-session[4729]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:16.409636 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:33:16.410381 systemd[1]: sshd@12-172.31.29.23:22-139.178.89.65:48216.service: Deactivated successfully. Feb 13 15:33:16.414700 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:33:16.415969 systemd-logind[1875]: Removed session 13. Feb 13 15:33:21.444664 systemd[1]: Started sshd@13-172.31.29.23:22-139.178.89.65:48232.service - OpenSSH per-connection server daemon (139.178.89.65:48232). Feb 13 15:33:21.617170 sshd[4743]: Accepted publickey for core from 139.178.89.65 port 48232 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:21.618279 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:21.625981 systemd-logind[1875]: New session 14 of user core. Feb 13 15:33:21.629468 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:33:21.851681 sshd[4745]: Connection closed by 139.178.89.65 port 48232 Feb 13 15:33:21.852627 sshd-session[4743]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:21.858986 systemd[1]: sshd@13-172.31.29.23:22-139.178.89.65:48232.service: Deactivated successfully. Feb 13 15:33:21.862379 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:33:21.865098 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:33:21.867313 systemd-logind[1875]: Removed session 14. Feb 13 15:33:26.889690 systemd[1]: Started sshd@14-172.31.29.23:22-139.178.89.65:33614.service - OpenSSH per-connection server daemon (139.178.89.65:33614). Feb 13 15:33:27.067760 sshd[4758]: Accepted publickey for core from 139.178.89.65 port 33614 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:27.068573 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:27.082419 systemd-logind[1875]: New session 15 of user core. Feb 13 15:33:27.091408 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:33:27.381570 sshd[4760]: Connection closed by 139.178.89.65 port 33614 Feb 13 15:33:27.382215 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:27.396404 systemd[1]: sshd@14-172.31.29.23:22-139.178.89.65:33614.service: Deactivated successfully. Feb 13 15:33:27.402580 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:33:27.408820 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:33:27.410161 systemd-logind[1875]: Removed session 15. Feb 13 15:33:32.435228 systemd[1]: Started sshd@15-172.31.29.23:22-139.178.89.65:33620.service - OpenSSH per-connection server daemon (139.178.89.65:33620). Feb 13 15:33:32.660787 sshd[4773]: Accepted publickey for core from 139.178.89.65 port 33620 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:32.662782 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:32.670197 systemd-logind[1875]: New session 16 of user core. Feb 13 15:33:32.673411 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:33:32.989317 sshd[4775]: Connection closed by 139.178.89.65 port 33620 Feb 13 15:33:32.990475 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:33.022679 systemd[1]: sshd@15-172.31.29.23:22-139.178.89.65:33620.service: Deactivated successfully. Feb 13 15:33:33.026267 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:33:33.028313 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:33:33.036827 systemd[1]: Started sshd@16-172.31.29.23:22-139.178.89.65:33634.service - OpenSSH per-connection server daemon (139.178.89.65:33634). Feb 13 15:33:33.040052 systemd-logind[1875]: Removed session 16. Feb 13 15:33:33.213589 sshd[4786]: Accepted publickey for core from 139.178.89.65 port 33634 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:33.215849 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:33.223579 systemd-logind[1875]: New session 17 of user core. Feb 13 15:33:33.229360 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:33:34.008119 sshd[4788]: Connection closed by 139.178.89.65 port 33634 Feb 13 15:33:34.010424 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:34.018804 systemd[1]: sshd@16-172.31.29.23:22-139.178.89.65:33634.service: Deactivated successfully. Feb 13 15:33:34.022339 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:33:34.024220 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:33:34.028206 systemd-logind[1875]: Removed session 17. Feb 13 15:33:34.044087 systemd[1]: Started sshd@17-172.31.29.23:22-139.178.89.65:33648.service - OpenSSH per-connection server daemon (139.178.89.65:33648). Feb 13 15:33:34.262928 sshd[4797]: Accepted publickey for core from 139.178.89.65 port 33648 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:34.264714 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:34.274891 systemd-logind[1875]: New session 18 of user core. Feb 13 15:33:34.286401 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:33:36.490599 sshd[4799]: Connection closed by 139.178.89.65 port 33648 Feb 13 15:33:36.491796 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:36.505020 systemd[1]: sshd@17-172.31.29.23:22-139.178.89.65:33648.service: Deactivated successfully. Feb 13 15:33:36.511832 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:33:36.520481 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:33:36.537903 systemd[1]: Started sshd@18-172.31.29.23:22-139.178.89.65:39410.service - OpenSSH per-connection server daemon (139.178.89.65:39410). Feb 13 15:33:36.540552 systemd-logind[1875]: Removed session 18. Feb 13 15:33:36.733446 sshd[4815]: Accepted publickey for core from 139.178.89.65 port 39410 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:36.736521 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:36.747258 systemd-logind[1875]: New session 19 of user core. Feb 13 15:33:36.754419 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 15:33:37.457976 sshd[4817]: Connection closed by 139.178.89.65 port 39410 Feb 13 15:33:37.460475 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:37.469716 systemd[1]: sshd@18-172.31.29.23:22-139.178.89.65:39410.service: Deactivated successfully. Feb 13 15:33:37.474854 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:33:37.479790 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:33:37.507802 systemd[1]: Started sshd@19-172.31.29.23:22-139.178.89.65:39422.service - OpenSSH per-connection server daemon (139.178.89.65:39422). Feb 13 15:33:37.509857 systemd-logind[1875]: Removed session 19. Feb 13 15:33:37.677381 sshd[4826]: Accepted publickey for core from 139.178.89.65 port 39422 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:37.679269 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:37.690584 systemd-logind[1875]: New session 20 of user core. Feb 13 15:33:37.704700 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:33:37.967894 sshd[4828]: Connection closed by 139.178.89.65 port 39422 Feb 13 15:33:37.969028 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:37.975960 systemd[1]: sshd@19-172.31.29.23:22-139.178.89.65:39422.service: Deactivated successfully. Feb 13 15:33:37.979719 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:33:37.982591 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:33:37.984048 systemd-logind[1875]: Removed session 20. Feb 13 15:33:43.002212 systemd[1]: Started sshd@20-172.31.29.23:22-139.178.89.65:39434.service - OpenSSH per-connection server daemon (139.178.89.65:39434). Feb 13 15:33:43.191339 sshd[4840]: Accepted publickey for core from 139.178.89.65 port 39434 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:43.192588 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:43.200801 systemd-logind[1875]: New session 21 of user core. Feb 13 15:33:43.209021 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:33:43.448804 sshd[4842]: Connection closed by 139.178.89.65 port 39434 Feb 13 15:33:43.452438 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:43.463079 systemd[1]: sshd@20-172.31.29.23:22-139.178.89.65:39434.service: Deactivated successfully. Feb 13 15:33:43.477797 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:33:43.488455 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:33:43.498441 systemd-logind[1875]: Removed session 21. Feb 13 15:33:48.510680 systemd[1]: Started sshd@21-172.31.29.23:22-139.178.89.65:47052.service - OpenSSH per-connection server daemon (139.178.89.65:47052). Feb 13 15:33:48.696063 sshd[4856]: Accepted publickey for core from 139.178.89.65 port 47052 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:48.700788 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:48.705847 systemd-logind[1875]: New session 22 of user core. Feb 13 15:33:48.711720 systemd[1]: Started session-22.scope - Session 22 of User core. 
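Every session in this stretch follows the same four-step shape: systemd starts a per-connection sshd@N-<local>:22-<remote>:<port>.service, sshd accepts the public key, pam_unix opens the session for core, and logind allocates session-N.scope; teardown reverses the order. The "Accepted publickey" line is the one carrying identity data (user, source address, port, key fingerprint). A small illustrative parser for it, built around a hypothetical parseAccepted helper that is not part of any real tool:

    package main

    import (
        "fmt"
        "regexp"
    )

    // acceptedRE matches sshd's "Accepted publickey" message as it appears
    // in the journal lines above; groups: user, IP, port, key type, fingerprint.
    var acceptedRE = regexp.MustCompile(
        `Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

    func parseAccepted(line string) (user, ip, port, fp string, ok bool) {
        m := acceptedRE.FindStringSubmatch(line)
        if m == nil {
            return "", "", "", "", false
        }
        return m[1], m[2], m[3], m[5], true
    }

    func main() {
        line := "Accepted publickey for core from 139.178.89.65 port 48232 " +
            "ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM"
        if user, ip, port, fp, ok := parseAccepted(line); ok {
            fmt.Printf("user=%s ip=%s port=%s fingerprint=%s\n", user, ip, port, fp)
        }
    }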
Feb 13 15:33:48.995438 sshd[4858]: Connection closed by 139.178.89.65 port 47052 Feb 13 15:33:48.997333 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:49.003542 systemd[1]: sshd@21-172.31.29.23:22-139.178.89.65:47052.service: Deactivated successfully. Feb 13 15:33:49.006265 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:33:49.007224 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:33:49.008420 systemd-logind[1875]: Removed session 22. Feb 13 15:33:54.037231 systemd[1]: Started sshd@22-172.31.29.23:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062). Feb 13 15:33:54.234458 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:54.236001 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:54.259237 systemd-logind[1875]: New session 23 of user core. Feb 13 15:33:54.268365 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:33:54.463578 sshd[4871]: Connection closed by 139.178.89.65 port 47062 Feb 13 15:33:54.464379 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:54.467824 systemd[1]: sshd@22-172.31.29.23:22-139.178.89.65:47062.service: Deactivated successfully. Feb 13 15:33:54.470566 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:33:54.472682 systemd-logind[1875]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:33:54.474032 systemd-logind[1875]: Removed session 23. Feb 13 15:33:59.505547 systemd[1]: Started sshd@23-172.31.29.23:22-139.178.89.65:40486.service - OpenSSH per-connection server daemon (139.178.89.65:40486). Feb 13 15:33:59.718181 sshd[4884]: Accepted publickey for core from 139.178.89.65 port 40486 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:33:59.719450 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:59.742876 systemd-logind[1875]: New session 24 of user core. Feb 13 15:33:59.749844 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:34:00.164856 sshd[4886]: Connection closed by 139.178.89.65 port 40486 Feb 13 15:34:00.166553 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:00.179471 systemd[1]: sshd@23-172.31.29.23:22-139.178.89.65:40486.service: Deactivated successfully. Feb 13 15:34:00.193613 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:34:00.200473 systemd-logind[1875]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:34:00.225235 systemd[1]: Started sshd@24-172.31.29.23:22-139.178.89.65:40488.service - OpenSSH per-connection server daemon (139.178.89.65:40488). Feb 13 15:34:00.232097 systemd-logind[1875]: Removed session 24. Feb 13 15:34:00.452153 sshd[4896]: Accepted publickey for core from 139.178.89.65 port 40488 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:34:00.453888 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:00.465464 systemd-logind[1875]: New session 25 of user core. Feb 13 15:34:00.475456 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:34:02.956909 systemd[1]: run-containerd-runc-k8s.io-4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506-runc.spU7th.mount: Deactivated successfully. 
Feb 13 15:34:02.968157 containerd[1898]: time="2025-02-13T15:34:02.968102394Z" level=info msg="StopContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" with timeout 30 (s)" Feb 13 15:34:02.972187 containerd[1898]: time="2025-02-13T15:34:02.971287217Z" level=info msg="Stop container \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" with signal terminated" Feb 13 15:34:03.012629 containerd[1898]: time="2025-02-13T15:34:03.012534799Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:34:03.028988 containerd[1898]: time="2025-02-13T15:34:03.028761763Z" level=info msg="StopContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" with timeout 2 (s)" Feb 13 15:34:03.029640 containerd[1898]: time="2025-02-13T15:34:03.029578175Z" level=info msg="Stop container \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" with signal terminated" Feb 13 15:34:03.042057 systemd[1]: cri-containerd-9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45.scope: Deactivated successfully. Feb 13 15:34:03.118439 systemd-networkd[1740]: lxc_health: Link DOWN Feb 13 15:34:03.118452 systemd-networkd[1740]: lxc_health: Lost carrier Feb 13 15:34:03.338546 systemd[1]: cri-containerd-4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506.scope: Deactivated successfully. Feb 13 15:34:03.340282 systemd[1]: cri-containerd-4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506.scope: Consumed 9.576s CPU time. Feb 13 15:34:03.466772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45-rootfs.mount: Deactivated successfully. Feb 13 15:34:03.519639 containerd[1898]: time="2025-02-13T15:34:03.519539623Z" level=info msg="shim disconnected" id=9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45 namespace=k8s.io Feb 13 15:34:03.519961 containerd[1898]: time="2025-02-13T15:34:03.519932658Z" level=warning msg="cleaning up after shim disconnected" id=9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45 namespace=k8s.io Feb 13 15:34:03.520069 containerd[1898]: time="2025-02-13T15:34:03.520052585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:03.559230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506-rootfs.mount: Deactivated successfully. 
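The pair of messages StopContainer "…" with timeout 30 (s) followed by Stop container "…" with signal terminated is containerd's CRI plugin honoring the pod's termination grace period: SIGTERM first, SIGKILL if the container outlives the timeout, after which systemd reports the matching cri-containerd-….scope deactivated. The "failed to reload cni configuration" error in between is expected here, since the teardown removes /etc/cni/net.d/05-cilium.conf and leaves no CNI config behind. The kubelet drives the stop through the CRI gRPC API; a rough, hedged sketch of that call follows, where the socket path and module versions are assumptions and not taken from this log:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket location on a Flatcar node.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // Mirrors the logged request: SIGTERM, then SIGKILL after 30 s.
        _, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45",
            Timeout:     30,
        })
        if err != nil {
            log.Fatal(err)
        }
    }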
Feb 13 15:34:03.594597 containerd[1898]: time="2025-02-13T15:34:03.594402214Z" level=info msg="shim disconnected" id=4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506 namespace=k8s.io Feb 13 15:34:03.596350 containerd[1898]: time="2025-02-13T15:34:03.594916083Z" level=warning msg="cleaning up after shim disconnected" id=4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506 namespace=k8s.io Feb 13 15:34:03.600780 containerd[1898]: time="2025-02-13T15:34:03.600725159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:03.627220 containerd[1898]: time="2025-02-13T15:34:03.627171330Z" level=info msg="StopContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" returns successfully" Feb 13 15:34:03.640424 containerd[1898]: time="2025-02-13T15:34:03.640329513Z" level=info msg="StopContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" returns successfully" Feb 13 15:34:03.669328 containerd[1898]: time="2025-02-13T15:34:03.655802309Z" level=info msg="StopPodSandbox for \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\"" Feb 13 15:34:03.669328 containerd[1898]: time="2025-02-13T15:34:03.657208786Z" level=info msg="StopPodSandbox for \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\"" Feb 13 15:34:03.680240 containerd[1898]: time="2025-02-13T15:34:03.677698627Z" level=info msg="Container to stop \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.680240 containerd[1898]: time="2025-02-13T15:34:03.680243319Z" level=info msg="Container to stop \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.680438 containerd[1898]: time="2025-02-13T15:34:03.680258246Z" level=info msg="Container to stop \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.680438 containerd[1898]: time="2025-02-13T15:34:03.680272923Z" level=info msg="Container to stop \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.680438 containerd[1898]: time="2025-02-13T15:34:03.680293883Z" level=info msg="Container to stop \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.680719 containerd[1898]: time="2025-02-13T15:34:03.677698613Z" level=info msg="Container to stop \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:34:03.693276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1-shm.mount: Deactivated successfully. Feb 13 15:34:03.693459 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df-shm.mount: Deactivated successfully. Feb 13 15:34:03.714197 systemd[1]: cri-containerd-abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1.scope: Deactivated successfully. Feb 13 15:34:03.739936 systemd[1]: cri-containerd-1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df.scope: Deactivated successfully. 
Feb 13 15:34:03.860445 containerd[1898]: time="2025-02-13T15:34:03.860058794Z" level=info msg="shim disconnected" id=abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1 namespace=k8s.io Feb 13 15:34:03.861771 containerd[1898]: time="2025-02-13T15:34:03.861554665Z" level=warning msg="cleaning up after shim disconnected" id=abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1 namespace=k8s.io Feb 13 15:34:03.861771 containerd[1898]: time="2025-02-13T15:34:03.861588455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:03.884196 containerd[1898]: time="2025-02-13T15:34:03.884023281Z" level=info msg="shim disconnected" id=1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df namespace=k8s.io Feb 13 15:34:03.884196 containerd[1898]: time="2025-02-13T15:34:03.884094658Z" level=warning msg="cleaning up after shim disconnected" id=1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df namespace=k8s.io Feb 13 15:34:03.884196 containerd[1898]: time="2025-02-13T15:34:03.884108214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:03.938216 containerd[1898]: time="2025-02-13T15:34:03.937504829Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:34:03.939355 containerd[1898]: time="2025-02-13T15:34:03.939319586Z" level=info msg="TearDown network for sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" successfully" Feb 13 15:34:03.939493 containerd[1898]: time="2025-02-13T15:34:03.939475634Z" level=info msg="StopPodSandbox for \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" returns successfully" Feb 13 15:34:03.956101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1-rootfs.mount: Deactivated successfully. Feb 13 15:34:03.956351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df-rootfs.mount: Deactivated successfully. 
Feb 13 15:34:03.964824 containerd[1898]: time="2025-02-13T15:34:03.964726870Z" level=info msg="TearDown network for sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" successfully" Feb 13 15:34:03.969817 containerd[1898]: time="2025-02-13T15:34:03.969771563Z" level=info msg="StopPodSandbox for \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" returns successfully" Feb 13 15:34:04.098560 kubelet[3292]: I0213 15:34:04.098520 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt95t\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-kube-api-access-gt95t\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099153 kubelet[3292]: I0213 15:34:04.099110 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-config-path\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099255 kubelet[3292]: I0213 15:34:04.099190 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee2a8f05-f891-43fc-8410-d4394031cfae-clustermesh-secrets\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099255 kubelet[3292]: I0213 15:34:04.099223 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-hostproc\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099255 kubelet[3292]: I0213 15:34:04.099250 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-etc-cni-netd\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099401 kubelet[3292]: I0213 15:34:04.099280 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-kernel\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099401 kubelet[3292]: I0213 15:34:04.099314 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-hubble-tls\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099401 kubelet[3292]: I0213 15:34:04.099349 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p7ld\" (UniqueName: \"kubernetes.io/projected/169fe880-db03-415c-8e71-809acabe91f6-kube-api-access-6p7ld\") pod \"169fe880-db03-415c-8e71-809acabe91f6\" (UID: \"169fe880-db03-415c-8e71-809acabe91f6\") " Feb 13 15:34:04.099401 kubelet[3292]: I0213 15:34:04.099379 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-cgroup\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" 
(UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099411 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-xtables-lock\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099439 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-net\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099469 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169fe880-db03-415c-8e71-809acabe91f6-cilium-config-path\") pod \"169fe880-db03-415c-8e71-809acabe91f6\" (UID: \"169fe880-db03-415c-8e71-809acabe91f6\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099495 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-bpf-maps\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099523 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cni-path\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.099594 kubelet[3292]: I0213 15:34:04.099555 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-lib-modules\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.102202 kubelet[3292]: I0213 15:34:04.099585 3292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-run\") pod \"ee2a8f05-f891-43fc-8410-d4394031cfae\" (UID: \"ee2a8f05-f891-43fc-8410-d4394031cfae\") " Feb 13 15:34:04.106328 kubelet[3292]: I0213 15:34:04.099662 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.113970 kubelet[3292]: I0213 15:34:04.113090 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:34:04.131738 kubelet[3292]: I0213 15:34:04.131672 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-hostproc" (OuterVolumeSpecName: "hostproc") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.131738 kubelet[3292]: I0213 15:34:04.131734 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.132015 kubelet[3292]: I0213 15:34:04.131755 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.158360 kubelet[3292]: I0213 15:34:04.158309 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.159416 kubelet[3292]: I0213 15:34:04.158761 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.159416 kubelet[3292]: I0213 15:34:04.158939 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.160664 kubelet[3292]: I0213 15:34:04.159781 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.160664 kubelet[3292]: I0213 15:34:04.160187 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cni-path" (OuterVolumeSpecName: "cni-path") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.160664 kubelet[3292]: I0213 15:34:04.160483 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:34:04.176634 kubelet[3292]: I0213 15:34:04.169737 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-kube-api-access-gt95t" (OuterVolumeSpecName: "kube-api-access-gt95t") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "kube-api-access-gt95t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:34:04.176634 kubelet[3292]: I0213 15:34:04.169889 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:34:04.176634 kubelet[3292]: I0213 15:34:04.172000 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169fe880-db03-415c-8e71-809acabe91f6-kube-api-access-6p7ld" (OuterVolumeSpecName: "kube-api-access-6p7ld") pod "169fe880-db03-415c-8e71-809acabe91f6" (UID: "169fe880-db03-415c-8e71-809acabe91f6"). InnerVolumeSpecName "kube-api-access-6p7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:34:04.166435 systemd[1]: var-lib-kubelet-pods-ee2a8f05\x2df891\x2d43fc\x2d8410\x2dd4394031cfae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:34:04.166587 systemd[1]: var-lib-kubelet-pods-ee2a8f05\x2df891\x2d43fc\x2d8410\x2dd4394031cfae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgt95t.mount: Deactivated successfully. Feb 13 15:34:04.182733 kubelet[3292]: I0213 15:34:04.177714 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169fe880-db03-415c-8e71-809acabe91f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "169fe880-db03-415c-8e71-809acabe91f6" (UID: "169fe880-db03-415c-8e71-809acabe91f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:34:04.191185 kubelet[3292]: I0213 15:34:04.190740 3292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2a8f05-f891-43fc-8410-d4394031cfae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ee2a8f05-f891-43fc-8410-d4394031cfae" (UID: "ee2a8f05-f891-43fc-8410-d4394031cfae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:34:04.194484 systemd[1]: var-lib-kubelet-pods-169fe880\x2ddb03\x2d415c\x2d8e71\x2d809acabe91f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6p7ld.mount: Deactivated successfully. 
Feb 13 15:34:04.200962 kubelet[3292]: I0213 15:34:04.200899 3292 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-etc-cni-netd\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.200962 kubelet[3292]: I0213 15:34:04.200948 3292 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-kernel\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.200962 kubelet[3292]: I0213 15:34:04.200965 3292 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-hubble-tls\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.200979 3292 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-hostproc\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.200994 3292 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-cgroup\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201009 3292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6p7ld\" (UniqueName: \"kubernetes.io/projected/169fe880-db03-415c-8e71-809acabe91f6-kube-api-access-6p7ld\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201024 3292 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-host-proc-sys-net\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201042 3292 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169fe880-db03-415c-8e71-809acabe91f6-cilium-config-path\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201060 3292 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-bpf-maps\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201077 3292 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cni-path\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201381 kubelet[3292]: I0213 15:34:04.201210 3292 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-xtables-lock\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201991 kubelet[3292]: I0213 15:34:04.201264 3292 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-run\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201991 kubelet[3292]: I0213 15:34:04.201280 3292 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2a8f05-f891-43fc-8410-d4394031cfae-lib-modules\") on node 
\"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201991 kubelet[3292]: I0213 15:34:04.201297 3292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gt95t\" (UniqueName: \"kubernetes.io/projected/ee2a8f05-f891-43fc-8410-d4394031cfae-kube-api-access-gt95t\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201991 kubelet[3292]: I0213 15:34:04.201313 3292 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee2a8f05-f891-43fc-8410-d4394031cfae-clustermesh-secrets\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.201991 kubelet[3292]: I0213 15:34:04.201339 3292 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee2a8f05-f891-43fc-8410-d4394031cfae-cilium-config-path\") on node \"ip-172-31-29-23\" DevicePath \"\"" Feb 13 15:34:04.210500 systemd[1]: var-lib-kubelet-pods-ee2a8f05\x2df891\x2d43fc\x2d8410\x2dd4394031cfae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:34:04.370897 kubelet[3292]: E0213 15:34:04.370767 3292 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:34:04.609867 sshd[4898]: Connection closed by 139.178.89.65 port 40488 Feb 13 15:34:04.610890 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:04.628314 systemd[1]: sshd@24-172.31.29.23:22-139.178.89.65:40488.service: Deactivated successfully. Feb 13 15:34:04.641782 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:34:04.651940 systemd-logind[1875]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:34:04.679793 systemd[1]: Started sshd@25-172.31.29.23:22-139.178.89.65:58676.service - OpenSSH per-connection server daemon (139.178.89.65:58676). Feb 13 15:34:04.685578 systemd-logind[1875]: Removed session 25. Feb 13 15:34:04.900723 kubelet[3292]: I0213 15:34:04.900603 3292 scope.go:117] "RemoveContainer" containerID="9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45" Feb 13 15:34:04.940432 systemd[1]: Removed slice kubepods-besteffort-pod169fe880_db03_415c_8e71_809acabe91f6.slice - libcontainer container kubepods-besteffort-pod169fe880_db03_415c_8e71_809acabe91f6.slice. Feb 13 15:34:04.983505 containerd[1898]: time="2025-02-13T15:34:04.983448164Z" level=info msg="RemoveContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\"" Feb 13 15:34:05.008770 containerd[1898]: time="2025-02-13T15:34:05.008722713Z" level=info msg="RemoveContainer for \"9d4a5f63da51fd9c7e9b25fe4b8c1aac219044febe7875e965f69ee1d238ab45\" returns successfully" Feb 13 15:34:05.046448 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 58676 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:34:05.062327 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:05.080298 systemd-logind[1875]: New session 26 of user core. Feb 13 15:34:05.093658 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 15:34:05.165204 kubelet[3292]: I0213 15:34:05.165030 3292 scope.go:117] "RemoveContainer" containerID="4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506" Feb 13 15:34:05.167033 systemd[1]: Removed slice kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice - libcontainer container kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice. Feb 13 15:34:05.171825 systemd[1]: kubepods-burstable-podee2a8f05_f891_43fc_8410_d4394031cfae.slice: Consumed 9.669s CPU time. Feb 13 15:34:05.174628 containerd[1898]: time="2025-02-13T15:34:05.174283276Z" level=info msg="RemoveContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\"" Feb 13 15:34:05.200699 kubelet[3292]: I0213 15:34:05.200658 3292 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="169fe880-db03-415c-8e71-809acabe91f6" path="/var/lib/kubelet/pods/169fe880-db03-415c-8e71-809acabe91f6/volumes" Feb 13 15:34:05.206094 containerd[1898]: time="2025-02-13T15:34:05.205946121Z" level=info msg="RemoveContainer for \"4b04194ccaa5e49c3ab58e2cd264f401d1903963530f7fdc8349260cb678c506\" returns successfully" Feb 13 15:34:05.210365 kubelet[3292]: I0213 15:34:05.209374 3292 scope.go:117] "RemoveContainer" containerID="d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c" Feb 13 15:34:05.222731 containerd[1898]: time="2025-02-13T15:34:05.222687067Z" level=info msg="RemoveContainer for \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\"" Feb 13 15:34:05.263558 containerd[1898]: time="2025-02-13T15:34:05.263391738Z" level=info msg="RemoveContainer for \"d283fea7eaa8bdca3cbe9d06b9babb35ef02c254f245ec18deecb606f8e2a51c\" returns successfully" Feb 13 15:34:05.267240 containerd[1898]: time="2025-02-13T15:34:05.266074277Z" level=info msg="RemoveContainer for \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\"" Feb 13 15:34:05.267439 kubelet[3292]: I0213 15:34:05.264156 3292 scope.go:117] "RemoveContainer" containerID="0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1" Feb 13 15:34:05.286750 containerd[1898]: time="2025-02-13T15:34:05.286701337Z" level=info msg="RemoveContainer for \"0179d92aa942e2ca435be8255e45643e0c14b41e96406641b5681273b0520bc1\" returns successfully" Feb 13 15:34:05.287791 kubelet[3292]: I0213 15:34:05.287753 3292 scope.go:117] "RemoveContainer" containerID="a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c" Feb 13 15:34:05.291917 containerd[1898]: time="2025-02-13T15:34:05.291274288Z" level=info msg="RemoveContainer for \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\"" Feb 13 15:34:05.297285 containerd[1898]: time="2025-02-13T15:34:05.297164398Z" level=info msg="RemoveContainer for \"a178c7d244737c250ea4376e02f048dfdd2d0798cc8f162af41e56cc9a4cab2c\" returns successfully" Feb 13 15:34:05.298392 kubelet[3292]: I0213 15:34:05.297924 3292 scope.go:117] "RemoveContainer" containerID="7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043" Feb 13 15:34:05.301237 containerd[1898]: time="2025-02-13T15:34:05.301052020Z" level=info msg="RemoveContainer for \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\"" Feb 13 15:34:05.309094 containerd[1898]: time="2025-02-13T15:34:05.309023293Z" level=info msg="RemoveContainer for \"7c1a3e0e3c2fab5c7cbd8256b30921241588136e076e76d12b530d706aa1a043\" returns successfully" Feb 13 15:34:05.326835 ntpd[1867]: Deleting interface #11 lxc_health, fe80::44b6:14ff:fec7:6709%8#123, interface stats: received=0, sent=0, 
dropped=0, active_time=70 secs Feb 13 15:34:05.327403 ntpd[1867]: 13 Feb 15:34:05 ntpd[1867]: Deleting interface #11 lxc_health, fe80::44b6:14ff:fec7:6709%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Feb 13 15:34:06.209262 sshd[5060]: Connection closed by 139.178.89.65 port 58676 Feb 13 15:34:06.210620 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:06.219857 systemd[1]: sshd@25-172.31.29.23:22-139.178.89.65:58676.service: Deactivated successfully. Feb 13 15:34:06.224958 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:34:06.232973 systemd-logind[1875]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:34:06.234400 kubelet[3292]: I0213 15:34:06.232974 3292 topology_manager.go:215] "Topology Admit Handler" podUID="2508b9e0-cd98-4688-9b2b-becb00871db3" podNamespace="kube-system" podName="cilium-dxfkk" Feb 13 15:34:06.238962 kubelet[3292]: E0213 15:34:06.238913 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="mount-bpf-fs" Feb 13 15:34:06.238962 kubelet[3292]: E0213 15:34:06.238957 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169fe880-db03-415c-8e71-809acabe91f6" containerName="cilium-operator" Feb 13 15:34:06.238962 kubelet[3292]: E0213 15:34:06.238968 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="mount-cgroup" Feb 13 15:34:06.239276 kubelet[3292]: E0213 15:34:06.238977 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="apply-sysctl-overwrites" Feb 13 15:34:06.239276 kubelet[3292]: E0213 15:34:06.238986 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="clean-cilium-state" Feb 13 15:34:06.239276 kubelet[3292]: E0213 15:34:06.238994 3292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="cilium-agent" Feb 13 15:34:06.239276 kubelet[3292]: I0213 15:34:06.239071 3292 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" containerName="cilium-agent" Feb 13 15:34:06.239276 kubelet[3292]: I0213 15:34:06.239082 3292 memory_manager.go:354] "RemoveStaleState removing state" podUID="169fe880-db03-415c-8e71-809acabe91f6" containerName="cilium-operator" Feb 13 15:34:06.259579 systemd[1]: Started sshd@26-172.31.29.23:22-139.178.89.65:58682.service - OpenSSH per-connection server daemon (139.178.89.65:58682). Feb 13 15:34:06.266574 systemd-logind[1875]: Removed session 26. Feb 13 15:34:06.298755 systemd[1]: Created slice kubepods-burstable-pod2508b9e0_cd98_4688_9b2b_becb00871db3.slice - libcontainer container kubepods-burstable-pod2508b9e0_cd98_4688_9b2b_becb00871db3.slice. 
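The earlier "lxc_health: Link DOWN / Lost carrier" lines from systemd-networkd and ntpd's "Deleting interface #11 lxc_health" entry above are two daemons reacting to the same rtnetlink event as cilium's health interface is torn down (ntpd appears twice because it also echoes its own log-file format into the journal). A sketch of subscribing to those link events, assuming the third-party github.com/vishvananda/netlink and golang.org/x/sys/unix packages and a Linux host:

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
        "golang.org/x/sys/unix"
    )

    func main() {
        updates := make(chan netlink.LinkUpdate)
        done := make(chan struct{})
        defer close(done)

        // Subscribe to the kernel's rtnetlink link notifications, the same
        // feed systemd-networkd and ntpd are reacting to in the log above.
        if err := netlink.LinkSubscribe(updates, done); err != nil {
            panic(err)
        }

        for u := range updates {
            // RTM_DELLINK announces interface removal (e.g. lxc_health).
            if u.Header.Type == unix.RTM_DELLINK {
                fmt.Println("link removed:", u.Link.Attrs().Name)
            }
        }
    }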
Feb 13 15:34:06.388819 kubelet[3292]: I0213 15:34:06.388688 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2508b9e0-cd98-4688-9b2b-becb00871db3-cilium-config-path\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.390938 kubelet[3292]: I0213 15:34:06.390896 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-hostproc\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391070 kubelet[3292]: I0213 15:34:06.390953 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-host-proc-sys-net\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391070 kubelet[3292]: I0213 15:34:06.390990 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-bpf-maps\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391070 kubelet[3292]: I0213 15:34:06.391017 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-cilium-cgroup\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391070 kubelet[3292]: I0213 15:34:06.391044 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-xtables-lock\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391075 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-host-proc-sys-kernel\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391108 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-lib-modules\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391148 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2508b9e0-cd98-4688-9b2b-becb00871db3-clustermesh-secrets\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391177 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/2508b9e0-cd98-4688-9b2b-becb00871db3-hubble-tls\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391210 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcjb\" (UniqueName: \"kubernetes.io/projected/2508b9e0-cd98-4688-9b2b-becb00871db3-kube-api-access-frcjb\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391374 kubelet[3292]: I0213 15:34:06.391240 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-cilium-run\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391655 kubelet[3292]: I0213 15:34:06.391267 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-cni-path\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391655 kubelet[3292]: I0213 15:34:06.391293 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2508b9e0-cd98-4688-9b2b-becb00871db3-etc-cni-netd\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.391655 kubelet[3292]: I0213 15:34:06.391321 3292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2508b9e0-cd98-4688-9b2b-becb00871db3-cilium-ipsec-secrets\") pod \"cilium-dxfkk\" (UID: \"2508b9e0-cd98-4688-9b2b-becb00871db3\") " pod="kube-system/cilium-dxfkk" Feb 13 15:34:06.480180 sshd[5069]: Accepted publickey for core from 139.178.89.65 port 58682 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:34:06.485321 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:06.493085 systemd-logind[1875]: New session 27 of user core. Feb 13 15:34:06.500220 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:34:06.616000 containerd[1898]: time="2025-02-13T15:34:06.615958220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxfkk,Uid:2508b9e0-cd98-4688-9b2b-becb00871db3,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:06.642587 sshd[5073]: Connection closed by 139.178.89.65 port 58682 Feb 13 15:34:06.637668 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:06.643686 systemd[1]: sshd@26-172.31.29.23:22-139.178.89.65:58682.service: Deactivated successfully. Feb 13 15:34:06.657825 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:34:06.663442 systemd-logind[1875]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:34:06.694811 systemd[1]: Started sshd@27-172.31.29.23:22-139.178.89.65:58698.service - OpenSSH per-connection server daemon (139.178.89.65:58698). Feb 13 15:34:06.697425 systemd-logind[1875]: Removed session 27. 
Feb 13 15:34:06.698390 containerd[1898]: time="2025-02-13T15:34:06.697966757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:06.701725 containerd[1898]: time="2025-02-13T15:34:06.699196174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:06.701725 containerd[1898]: time="2025-02-13T15:34:06.699228903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:06.701725 containerd[1898]: time="2025-02-13T15:34:06.699436080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:06.759940 systemd[1]: Started cri-containerd-4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26.scope - libcontainer container 4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26. Feb 13 15:34:06.804160 containerd[1898]: time="2025-02-13T15:34:06.804111099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxfkk,Uid:2508b9e0-cd98-4688-9b2b-becb00871db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\"" Feb 13 15:34:06.810580 containerd[1898]: time="2025-02-13T15:34:06.810538378Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:34:06.863517 containerd[1898]: time="2025-02-13T15:34:06.863337271Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6\"" Feb 13 15:34:06.865101 containerd[1898]: time="2025-02-13T15:34:06.864334488Z" level=info msg="StartContainer for \"ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6\"" Feb 13 15:34:06.921316 systemd[1]: Started cri-containerd-ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6.scope - libcontainer container ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6. Feb 13 15:34:06.935170 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 58698 ssh2: RSA SHA256:v7hTrtZ9/NhiAvXSp1iZfOxZYI4fXxME+gLHhLHyxgM Feb 13 15:34:06.937281 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:06.951164 systemd-logind[1875]: New session 28 of user core. Feb 13 15:34:06.968350 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:34:07.018800 containerd[1898]: time="2025-02-13T15:34:07.018426623Z" level=info msg="StartContainer for \"ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6\" returns successfully" Feb 13 15:34:07.061832 systemd[1]: cri-containerd-ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6.scope: Deactivated successfully. 
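RunPodSandbox returning the 4ad8bd45… sandbox ID followed by CreateContainer/StartContainer for mount-cgroup is the kubelet replaying cilium's init-container sequence in order (mount-cgroup here, apply-sysctl-overwrites and mount-bpf-fs further down); each init container is expected to exit, which is why its cri-containerd-….scope deactivates moments after the successful start. Continuing the hedged CRI sketch from the StopContainer example, the call shape is roughly the following, with the image and command as placeholders since the log does not record them:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startMountCgroup sketches the CreateContainer/StartContainer pair the
    // kubelet issues for cilium-dxfkk's first init container. Client setup
    // is as in the earlier StopContainer example.
    func startMountCgroup(ctx context.Context, client runtimeapi.RuntimeServiceClient) error {
        resp, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: "4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26",
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
                // Placeholder image and command; neither appears in this log.
                Image:   &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"},
                Command: []string{"sh", "-c", "echo placeholder"},
            },
            SandboxConfig: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-dxfkk",
                    Namespace: "kube-system",
                    Uid:       "2508b9e0-cd98-4688-9b2b-becb00871db3",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            return err
        }
        _, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: resp.ContainerId,
        })
        return err
    }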
Feb 13 15:34:07.132027 kubelet[3292]: I0213 15:34:07.131982 3292 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ee2a8f05-f891-43fc-8410-d4394031cfae" path="/var/lib/kubelet/pods/ee2a8f05-f891-43fc-8410-d4394031cfae/volumes"
Feb 13 15:34:07.166361 containerd[1898]: time="2025-02-13T15:34:07.163980320Z" level=info msg="shim disconnected" id=ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6 namespace=k8s.io
Feb 13 15:34:07.166745 containerd[1898]: time="2025-02-13T15:34:07.166366962Z" level=warning msg="cleaning up after shim disconnected" id=ae96de8c904417cfc0f649ba7d99f4b4b91333d0db34b7158470dbec3bae28b6 namespace=k8s.io
Feb 13 15:34:07.166745 containerd[1898]: time="2025-02-13T15:34:07.166450913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:08.042335 containerd[1898]: time="2025-02-13T15:34:08.042105364Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:34:08.072319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616425890.mount: Deactivated successfully.
Feb 13 15:34:08.075275 containerd[1898]: time="2025-02-13T15:34:08.075231988Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270\""
Feb 13 15:34:08.076167 containerd[1898]: time="2025-02-13T15:34:08.075833063Z" level=info msg="StartContainer for \"804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270\""
Feb 13 15:34:08.117350 systemd[1]: Started cri-containerd-804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270.scope - libcontainer container 804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270.
Feb 13 15:34:08.151439 containerd[1898]: time="2025-02-13T15:34:08.151397885Z" level=info msg="StartContainer for \"804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270\" returns successfully"
Feb 13 15:34:08.166002 systemd[1]: cri-containerd-804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270.scope: Deactivated successfully.
Feb 13 15:34:08.206640 containerd[1898]: time="2025-02-13T15:34:08.206577146Z" level=info msg="shim disconnected" id=804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270 namespace=k8s.io
Feb 13 15:34:08.206640 containerd[1898]: time="2025-02-13T15:34:08.206639067Z" level=warning msg="cleaning up after shim disconnected" id=804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270 namespace=k8s.io
Feb 13 15:34:08.207234 containerd[1898]: time="2025-02-13T15:34:08.206651174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:08.225848 containerd[1898]: time="2025-02-13T15:34:08.225797831Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:34:08.527125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-804f8ce54b2d7c242f817168c5c0f382ff1cbaff56ca3836607dfc34d3249270-rootfs.mount: Deactivated successfully.
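mount-cgroup and apply-sysctl-overwrites are one-shot init containers, so each produces the same exit signature: the scope deactivates, then containerd logs the shim disconnected / cleaning up after shim disconnected / cleaning up dead shim triple. The runc "exit status 255" cleanup warning appears to be teardown noise rather than a workload failure, since the kubelet immediately goes on to start the next init container. A small sketch, under the same exported-journal assumption as above, that counts those exits per container id:

```python
import re
from collections import Counter

# The container id in these messages is an unquoted 64-char hex string.
SHIM_RE = re.compile(r'msg="shim disconnected" id=(?P<cid>[0-9a-f]{64})')

def shim_exits(lines):
    """Count 'shim disconnected' events per container id; each one-shot init
    container in this log should contribute exactly one."""
    return Counter(m.group("cid") for line in lines for m in SHIM_RE.finditer(line))
```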
Feb 13 15:34:09.074186 containerd[1898]: time="2025-02-13T15:34:09.074142707Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:34:09.119298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748517347.mount: Deactivated successfully.
Feb 13 15:34:09.135534 containerd[1898]: time="2025-02-13T15:34:09.135180295Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552\""
Feb 13 15:34:09.137618 containerd[1898]: time="2025-02-13T15:34:09.136869070Z" level=info msg="StartContainer for \"7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552\""
Feb 13 15:34:09.176399 containerd[1898]: time="2025-02-13T15:34:09.175965562Z" level=info msg="StopPodSandbox for \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\""
Feb 13 15:34:09.176399 containerd[1898]: time="2025-02-13T15:34:09.176083232Z" level=info msg="TearDown network for sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" successfully"
Feb 13 15:34:09.176399 containerd[1898]: time="2025-02-13T15:34:09.176105008Z" level=info msg="StopPodSandbox for \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" returns successfully"
Feb 13 15:34:09.180091 containerd[1898]: time="2025-02-13T15:34:09.180054729Z" level=info msg="RemovePodSandbox for \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\""
Feb 13 15:34:09.180684 containerd[1898]: time="2025-02-13T15:34:09.180658796Z" level=info msg="Forcibly stopping sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\""
Feb 13 15:34:09.180933 containerd[1898]: time="2025-02-13T15:34:09.180867999Z" level=info msg="TearDown network for sandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" successfully"
Feb 13 15:34:09.193479 containerd[1898]: time="2025-02-13T15:34:09.193252665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:34:09.193479 containerd[1898]: time="2025-02-13T15:34:09.193331990Z" level=info msg="RemovePodSandbox \"abeeff2366480d8c3ced95cb7171fd1d01e1bdbbd23b91586ddb3bc4784dd1a1\" returns successfully"
Feb 13 15:34:09.195390 containerd[1898]: time="2025-02-13T15:34:09.194675878Z" level=info msg="StopPodSandbox for \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\""
Feb 13 15:34:09.195621 containerd[1898]: time="2025-02-13T15:34:09.195598177Z" level=info msg="TearDown network for sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" successfully"
Feb 13 15:34:09.195747 containerd[1898]: time="2025-02-13T15:34:09.195730609Z" level=info msg="StopPodSandbox for \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" returns successfully"
Feb 13 15:34:09.196985 containerd[1898]: time="2025-02-13T15:34:09.196747664Z" level=info msg="RemovePodSandbox for \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\""
Feb 13 15:34:09.199155 containerd[1898]: time="2025-02-13T15:34:09.198363429Z" level=info msg="Forcibly stopping sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\""
Feb 13 15:34:09.199155 containerd[1898]: time="2025-02-13T15:34:09.198561576Z" level=info msg="TearDown network for sandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" successfully"
Feb 13 15:34:09.206434 systemd[1]: Started cri-containerd-7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552.scope - libcontainer container 7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552.
Feb 13 15:34:09.211384 containerd[1898]: time="2025-02-13T15:34:09.211175447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:34:09.213281 containerd[1898]: time="2025-02-13T15:34:09.213174607Z" level=info msg="RemovePodSandbox \"1fd223449b8590f2f94194b7932a74baf731185aaa5097de4850cc27be74b0df\" returns successfully"
Feb 13 15:34:09.281806 containerd[1898]: time="2025-02-13T15:34:09.281762025Z" level=info msg="StartContainer for \"7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552\" returns successfully"
Feb 13 15:34:09.294465 systemd[1]: cri-containerd-7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552.scope: Deactivated successfully.
Feb 13 15:34:09.359986 containerd[1898]: time="2025-02-13T15:34:09.359674622Z" level=info msg="shim disconnected" id=7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552 namespace=k8s.io
Feb 13 15:34:09.359986 containerd[1898]: time="2025-02-13T15:34:09.359745963Z" level=warning msg="cleaning up after shim disconnected" id=7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552 namespace=k8s.io
Feb 13 15:34:09.359986 containerd[1898]: time="2025-02-13T15:34:09.359761463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:09.375372 kubelet[3292]: E0213 15:34:09.375095 3292 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:34:09.526612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ce6d7f54b400381ca1738865fe71d4ec7241bf089c8bd61e6de87b83252e552-rootfs.mount: Deactivated successfully.
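Interleaved with the cilium bring-up, the kubelet is garbage-collecting sandboxes from pods deleted earlier: StopPodSandbox, TearDown, RemovePodSandbox, then a forced second pass. The "not found ... Sending the event with nil podSandboxStatus" warning is expected on that second pass, since the sandbox is already gone, and each RemovePodSandbox still returns successfully. A sketch that cross-checks removal requests against results; the helper name is mine:

```python
import re

# Request: RemovePodSandbox for "abeeff..."   Result: RemovePodSandbox "abeeff..." returns successfully
REMOVE_REQ = re.compile(r'msg="RemovePodSandbox for \\?"(?P<sid>[0-9a-f]{64})\\?"')
REMOVE_OK = re.compile(r'msg="RemovePodSandbox \\?"(?P<sid>[0-9a-f]{64})\\?" returns successfully')

def unfinished_removals(lines):
    """Sandbox ids with a RemovePodSandbox request but no success line."""
    asked, done = set(), set()
    for line in lines:
        asked.update(m.group("sid") for m in REMOVE_REQ.finditer(line))
        done.update(m.group("sid") for m in REMOVE_OK.finditer(line))
    return asked - done
```

For the two sandboxes above (abeeff2366... and 1fd223449b...) the returned set is empty, i.e. both removals completed.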
Feb 13 15:34:10.092993 containerd[1898]: time="2025-02-13T15:34:10.092947547Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:34:10.123862 containerd[1898]: time="2025-02-13T15:34:10.121973112Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6\""
Feb 13 15:34:10.124443 containerd[1898]: time="2025-02-13T15:34:10.124404679Z" level=info msg="StartContainer for \"46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6\""
Feb 13 15:34:10.173395 systemd[1]: Started cri-containerd-46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6.scope - libcontainer container 46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6.
Feb 13 15:34:10.252587 systemd[1]: cri-containerd-46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6.scope: Deactivated successfully.
Feb 13 15:34:10.264083 containerd[1898]: time="2025-02-13T15:34:10.264040231Z" level=info msg="StartContainer for \"46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6\" returns successfully"
Feb 13 15:34:10.302885 containerd[1898]: time="2025-02-13T15:34:10.302793778Z" level=info msg="shim disconnected" id=46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6 namespace=k8s.io
Feb 13 15:34:10.302885 containerd[1898]: time="2025-02-13T15:34:10.302883231Z" level=warning msg="cleaning up after shim disconnected" id=46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6 namespace=k8s.io
Feb 13 15:34:10.303529 containerd[1898]: time="2025-02-13T15:34:10.302894419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:10.321655 containerd[1898]: time="2025-02-13T15:34:10.321598291Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:34:10.525865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46b0f1efde4a32c445ec1ffbcd22a872e601f9e9ae0b06d7700da3fdadef20f6-rootfs.mount: Deactivated successfully.
Feb 13 15:34:11.104421 containerd[1898]: time="2025-02-13T15:34:11.103867563Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:34:11.156622 containerd[1898]: time="2025-02-13T15:34:11.153737335Z" level=info msg="CreateContainer within sandbox \"4ad8bd455865aea9ec10a92948c79e6f864d0241fd8e6a4076fba8e687b9af26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be\""
Feb 13 15:34:11.162747 containerd[1898]: time="2025-02-13T15:34:11.162662462Z" level=info msg="StartContainer for \"978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be\""
Feb 13 15:34:11.231899 systemd[1]: Started cri-containerd-978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be.scope - libcontainer container 978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be.
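Taken together, the container names spell out the cilium pod's init chain as it actually ran here: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and only then the long-running cilium-agent. Because containerd's timestamps are RFC 3339 in UTC they sort lexicographically, so the order can be recovered mechanically; a sketch (function name mine):

```python
import re

# Successful starts look like:
#   time="2025-02-13T15:34:07.018426623Z" level=info
#   msg="StartContainer for \"ae96...\" returns successfully"
STARTED_RE = re.compile(
    r'time="(?P<ts>[^"]+)" level=info '
    r'msg="StartContainer for \\?"(?P<cid>[0-9a-f]{64})\\?" returns successfully"'
)

def start_order(lines):
    """Container ids ordered by their successful-start timestamp."""
    events = sorted(
        (m.group("ts"), m.group("cid"))
        for line in lines
        for m in STARTED_RE.finditer(line)
    )
    return [cid for _, cid in events]
```

Joining the result against the name-to-id map from the earlier sketch reproduces exactly the sequence listed above.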
Feb 13 15:34:11.305983 containerd[1898]: time="2025-02-13T15:34:11.304035084Z" level=info msg="StartContainer for \"978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be\" returns successfully"
Feb 13 15:34:11.882583 kubelet[3292]: I0213 15:34:11.880533 3292 setters.go:568] "Node became not ready" node="ip-172-31-29-23" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:34:11Z","lastTransitionTime":"2025-02-13T15:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:34:12.158191 kubelet[3292]: I0213 15:34:12.156111 3292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dxfkk" podStartSLOduration=6.156063575 podStartE2EDuration="6.156063575s" podCreationTimestamp="2025-02-13 15:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:12.154552855 +0000 UTC m=+123.260522387" watchObservedRunningTime="2025-02-13 15:34:12.156063575 +0000 UTC m=+123.262033131"
Feb 13 15:34:12.524086 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:34:16.104450 (udev-worker)[5961]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:34:16.108368 (udev-worker)[5962]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:34:16.109577 systemd-networkd[1740]: lxc_health: Link UP
Feb 13 15:34:16.119505 systemd-networkd[1740]: lxc_health: Gained carrier
Feb 13 15:34:17.855902 systemd-networkd[1740]: lxc_health: Gained IPv6LL
Feb 13 15:34:20.327522 ntpd[1867]: Listen normally on 14 lxc_health [fe80::5035:59ff:fede:28a8%14]:123
Feb 13 15:34:20.329288 ntpd[1867]: 13 Feb 15:34:20 ntpd[1867]: Listen normally on 14 lxc_health [fe80::5035:59ff:fede:28a8%14]:123
Feb 13 15:34:20.865530 systemd[1]: run-containerd-runc-k8s.io-978c26811483cd5ad8dee14e01a180319a7af138d69a839221f88b31d01979be-runc.Z58o52.mount: Deactivated successfully.
Feb 13 15:34:23.314721 sshd[5156]: Connection closed by 139.178.89.65 port 58698
Feb 13 15:34:23.316366 sshd-session[5094]: pam_unix(sshd:session): session closed for user core
Feb 13 15:34:23.326212 systemd[1]: sshd@27-172.31.29.23:22-139.178.89.65:58698.service: Deactivated successfully.
Feb 13 15:34:23.330258 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:34:23.335780 systemd-logind[1875]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:34:23.341637 systemd-logind[1875]: Removed session 28.
Feb 13 15:34:37.009757 systemd[1]: cri-containerd-1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c.scope: Deactivated successfully.
Feb 13 15:34:37.011053 systemd[1]: cri-containerd-1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c.scope: Consumed 3.512s CPU time, 24.5M memory peak, 0B memory swap peak.
Feb 13 15:34:37.046747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c-rootfs.mount: Deactivated successfully.
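The two SSH sessions woven through this log can be paired up by client port: session 28 was accepted at 15:34:06.935 and its connection closed at 15:34:23.314, roughly 16.4 seconds later. A sketch of computing that from the syslog-style timestamps follows; since those timestamps omit the year, it has to be supplied externally, and the function names are mine.

```python
import re
from datetime import datetime

# "Feb 13 15:34:06.935170 sshd[5094]: Accepted publickey ... port 58698 ssh2: ..."
OPEN_RE = re.compile(r'^(?P<ts>\w{3} \d+ [\d:.]+) sshd\[\d+\]: Accepted publickey .* port (?P<port>\d+)')
CLOSE_RE = re.compile(r'^(?P<ts>\w{3} \d+ [\d:.]+) sshd\[\d+\]: Connection closed .* port (?P<port>\d+)')

def _parse(ts, year):
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines, year=2025):
    """Pair 'Accepted publickey' with 'Connection closed' by client port."""
    opened, durations = {}, {}
    for line in lines:
        if (m := OPEN_RE.match(line)):
            opened[m.group("port")] = _parse(m.group("ts"), year)
        elif (m := CLOSE_RE.match(line)):
            start = opened.pop(m.group("port"), None)
            if start:
                durations[m.group("port")] = _parse(m.group("ts"), year) - start
    return durations
```

Over this section it yields about 0.16 s for port 58682 (session 27) and about 16.4 s for port 58698 (session 28).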
Feb 13 15:34:37.073396 containerd[1898]: time="2025-02-13T15:34:37.073320533Z" level=info msg="shim disconnected" id=1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c namespace=k8s.io
Feb 13 15:34:37.073396 containerd[1898]: time="2025-02-13T15:34:37.073386242Z" level=warning msg="cleaning up after shim disconnected" id=1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c namespace=k8s.io
Feb 13 15:34:37.073396 containerd[1898]: time="2025-02-13T15:34:37.073398400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:37.229855 kubelet[3292]: I0213 15:34:37.229817 3292 scope.go:117] "RemoveContainer" containerID="1defedcc2442d81cec43745e6d854f19ffbfffcddd6e4de3ffdd2be8e7c1093c"
Feb 13 15:34:37.233847 containerd[1898]: time="2025-02-13T15:34:37.233734939Z" level=info msg="CreateContainer within sandbox \"a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:34:37.276485 containerd[1898]: time="2025-02-13T15:34:37.276351524Z" level=info msg="CreateContainer within sandbox \"a4b384eb9923a04862ef550f3ea3ee4aa653bd6adcee89e850c9aa4a4476e6c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ea2e360c0e46b66cc0dcf055e7417338caea936791751030fcec0cb26aeda007\""
Feb 13 15:34:37.277614 containerd[1898]: time="2025-02-13T15:34:37.277580941Z" level=info msg="StartContainer for \"ea2e360c0e46b66cc0dcf055e7417338caea936791751030fcec0cb26aeda007\""
Feb 13 15:34:37.346398 systemd[1]: Started cri-containerd-ea2e360c0e46b66cc0dcf055e7417338caea936791751030fcec0cb26aeda007.scope - libcontainer container ea2e360c0e46b66cc0dcf055e7417338caea936791751030fcec0cb26aeda007.
Feb 13 15:34:37.410808 containerd[1898]: time="2025-02-13T15:34:37.410762854Z" level=info msg="StartContainer for \"ea2e360c0e46b66cc0dcf055e7417338caea936791751030fcec0cb26aeda007\" returns successfully"
Feb 13 15:34:42.274946 kubelet[3292]: E0213 15:34:42.274890 3292 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:34:42.596888 systemd[1]: cri-containerd-e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106.scope: Deactivated successfully.
Feb 13 15:34:42.598078 systemd[1]: cri-containerd-e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106.scope: Consumed 1.336s CPU time, 17.5M memory peak, 0B memory swap peak.
Feb 13 15:34:42.631859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106-rootfs.mount: Deactivated successfully.
Feb 13 15:34:42.665593 containerd[1898]: time="2025-02-13T15:34:42.665525871Z" level=info msg="shim disconnected" id=e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106 namespace=k8s.io
Feb 13 15:34:42.665593 containerd[1898]: time="2025-02-13T15:34:42.665585887Z" level=warning msg="cleaning up after shim disconnected" id=e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106 namespace=k8s.io
Feb 13 15:34:42.665593 containerd[1898]: time="2025-02-13T15:34:42.665598146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:43.250766 kubelet[3292]: I0213 15:34:43.250730 3292 scope.go:117] "RemoveContainer" containerID="e049a2ceacfb3461ec2753b3bc7a548adad5a4021906b98b00ae190f59c0a106"
Feb 13 15:34:43.258447 containerd[1898]: time="2025-02-13T15:34:43.258394405Z" level=info msg="CreateContainer within sandbox \"9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:34:43.294473 containerd[1898]: time="2025-02-13T15:34:43.294331304Z" level=info msg="CreateContainer within sandbox \"9e9f89b94fe2b7006ff18f82a72c4bf77cb6c1bfde656926fca03d656cb0274d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"25fd5d34d76231e4ad3ad570d84671e4de9acd9f8dc8baf51704f65a77a00b8b\""
Feb 13 15:34:43.296042 containerd[1898]: time="2025-02-13T15:34:43.296005405Z" level=info msg="StartContainer for \"25fd5d34d76231e4ad3ad570d84671e4de9acd9f8dc8baf51704f65a77a00b8b\""
Feb 13 15:34:43.401040 systemd[1]: Started cri-containerd-25fd5d34d76231e4ad3ad570d84671e4de9acd9f8dc8baf51704f65a77a00b8b.scope - libcontainer container 25fd5d34d76231e4ad3ad570d84671e4de9acd9f8dc8baf51704f65a77a00b8b.
Feb 13 15:34:43.499067 containerd[1898]: time="2025-02-13T15:34:43.499009917Z" level=info msg="StartContainer for \"25fd5d34d76231e4ad3ad570d84671e4de9acd9f8dc8baf51704f65a77a00b8b\" returns successfully"
Feb 13 15:34:52.275257 kubelet[3292]: E0213 15:34:52.275180 3292 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-23?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
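This last stretch shows two control-plane static pods being restarted in place: the kubelet removes the dead kube-controller-manager and kube-scheduler containers, and containerd recreates each in its existing sandbox with Attempt:1. The surrounding "Failed to update lease" timeouts against https://172.31.29.23:6443 are consistent with the apiserver on this node being briefly unresponsive during the same window, though the log alone does not establish the cause. A sketch that surfaces such restarts (Attempt >= 1) from the CreateContainer requests; the function name is mine:

```python
import re

# Matches the request line only ("for container &ContainerMetadata{...}"),
# not the "returns container id" result line, so each restart counts once.
RESTART_RE = re.compile(
    r'CreateContainer within sandbox .* for container '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:(?P<attempt>[1-9]\d*),\}'
)

def restarted_containers(lines):
    """(name, attempt) pairs for containers recreated with Attempt >= 1."""
    return sorted(
        (m.group("name"), int(m.group("attempt")))
        for line in lines
        if (m := RESTART_RE.search(line))
    )
```

For this section it returns [('kube-controller-manager', 1), ('kube-scheduler', 1)]; the cilium init containers are excluded because they all ran with Attempt:0.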