Feb 13 19:22:23.192218 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:22:23.192266 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:22:23.192280 kernel: BIOS-provided physical RAM map:
Feb 13 19:22:23.192291 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:22:23.192301 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:22:23.192310 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:22:23.192325 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:22:23.192335 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:22:23.192346 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:22:23.192356 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:22:23.192376 kernel: NX (Execute Disable) protection: active
Feb 13 19:22:23.192386 kernel: APIC: Static calls initialized
Feb 13 19:22:23.192396 kernel: SMBIOS 2.7 present.
Feb 13 19:22:23.192407 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:22:23.192424 kernel: Hypervisor detected: KVM
Feb 13 19:22:23.192435 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:22:23.192447 kernel: kvm-clock: using sched offset of 9848898641 cycles
Feb 13 19:22:23.192459 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:22:23.192471 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 19:22:23.192991 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:22:23.193008 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:22:23.193027 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:22:23.193041 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:22:23.193055 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:22:23.193069 kernel: Using GB pages for direct mapping
Feb 13 19:22:23.193083 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:22:23.193097 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:22:23.193111 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:22:23.193125 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:22:23.193139 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:22:23.193156 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:22:23.193170 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:22:23.193184 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:22:23.193198 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:22:23.193211 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:22:23.193225 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:22:23.193239 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:22:23.193252 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:22:23.193266 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:22:23.193283 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:22:23.193303 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:22:23.193318 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:22:23.193332 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:22:23.193347 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:22:23.193364 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:22:23.193378 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:22:23.193393 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:22:23.193407 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:22:23.193421 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:22:23.193436 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:22:23.193450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:22:23.193465 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:22:23.193479 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:22:23.193497 kernel: Zone ranges:
Feb 13 19:22:23.193512 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:22:23.193526 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:22:23.193541 kernel: Normal empty
Feb 13 19:22:23.193555 kernel: Movable zone start for each node
Feb 13 19:22:23.193914 kernel: Early memory node ranges
Feb 13 19:22:23.195195 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:22:23.195217 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:22:23.195232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:22:23.195248 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:22:23.195268 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:22:23.195538 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:22:23.198357 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:22:23.198390 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:22:23.198406 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:22:23.198421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:22:23.198436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:22:23.198450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:22:23.198465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:22:23.198486 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:22:23.198501 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:22:23.198515 kernel: TSC deadline timer available
Feb 13 19:22:23.198530 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:22:23.198544 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:22:23.198559 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:22:23.198573 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:22:23.198587 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:22:23.198602 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:22:23.198620 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:22:23.198635 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:22:23.198650 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:22:23.198664 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:22:23.198679 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:22:23.198696 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:22:23.198711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:22:23.198725 kernel: random: crng init done
Feb 13 19:22:23.198743 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:22:23.198757 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:22:23.198772 kernel: Fallback order for Node 0: 0
Feb 13 19:22:23.198787 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:22:23.198801 kernel: Policy zone: DMA32
Feb 13 19:22:23.198816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:22:23.198831 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127200K reserved, 0K cma-reserved)
Feb 13 19:22:23.198846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:22:23.198861 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:22:23.198879 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:22:23.198893 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:22:23.198908 kernel: Dynamic Preempt: voluntary
Feb 13 19:22:23.198922 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:22:23.198938 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:22:23.203521 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:22:23.203551 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:22:23.203567 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:22:23.203582 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:22:23.203603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:22:23.203618 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:22:23.203633 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:22:23.203648 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:22:23.203663 kernel: Console: colour VGA+ 80x25
Feb 13 19:22:23.203678 kernel: printk: console [ttyS0] enabled
Feb 13 19:22:23.203692 kernel: ACPI: Core revision 20230628
Feb 13 19:22:23.203707 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:22:23.203721 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:22:23.203739 kernel: x2apic enabled
Feb 13 19:22:23.203755 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:22:23.203781 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:22:23.203800 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 19:22:23.203815 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:22:23.203831 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:22:23.203846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:22:23.203861 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:22:23.203876 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:22:23.203891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:22:23.203907 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:22:23.203923 kernel: RETBleed: Vulnerable
Feb 13 19:22:23.203938 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:22:23.203981 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:22:23.203996 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:22:23.204011 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:22:23.204026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:22:23.204041 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:22:23.204057 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:22:23.204075 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:22:23.204091 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:22:23.204106 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:22:23.204122 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:22:23.204137 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:22:23.204153 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:22:23.204168 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:22:23.204184 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:22:23.204199 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:22:23.204214 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:22:23.204230 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:22:23.204248 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:22:23.204264 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:22:23.204280 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:22:23.204296 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:22:23.204311 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:22:23.204327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:22:23.204343 kernel: landlock: Up and running.
Feb 13 19:22:23.204359 kernel: SELinux: Initializing.
Feb 13 19:22:23.204374 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:22:23.204390 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:22:23.204406 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:22:23.204425 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:22:23.204441 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:22:23.204458 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:22:23.204473 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:22:23.204490 kernel: signal: max sigframe size: 3632
Feb 13 19:22:23.204505 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:22:23.204522 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:22:23.204538 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:22:23.204553 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:22:23.204573 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:22:23.204587 kernel: .... node #0, CPUs: #1
Feb 13 19:22:23.204604 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:22:23.204621 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:22:23.204637 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:22:23.204653 kernel: smpboot: Max logical packages: 1
Feb 13 19:22:23.204669 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 19:22:23.204684 kernel: devtmpfs: initialized
Feb 13 19:22:23.204700 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:22:23.204719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:22:23.204735 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:22:23.204754 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:22:23.213232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:22:23.213252 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:22:23.213269 kernel: audit: type=2000 audit(1739474541.729:1): state=initialized audit_enabled=0 res=1
Feb 13 19:22:23.213285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:22:23.213301 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:22:23.213318 kernel: cpuidle: using governor menu
Feb 13 19:22:23.213342 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:22:23.213358 kernel: dca service started, version 1.12.1
Feb 13 19:22:23.213374 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:22:23.213391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:22:23.213407 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:22:23.213423 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:22:23.213440 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:22:23.213456 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:22:23.213473 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:22:23.213493 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:22:23.213510 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:22:23.213526 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:22:23.213541 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:22:23.213557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:22:23.213571 kernel: ACPI: Interpreter enabled
Feb 13 19:22:23.214008 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:22:23.214203 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:22:23.214487 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:22:23.214521 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:22:23.214538 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:22:23.214829 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:22:23.216383 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:22:23.216551 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:22:23.216687 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:22:23.216707 kernel: acpiphp: Slot [3] registered
Feb 13 19:22:23.216737 kernel: acpiphp: Slot [4] registered
Feb 13 19:22:23.216762 kernel: acpiphp: Slot [5] registered
Feb 13 19:22:23.216782 kernel: acpiphp: Slot [6] registered
Feb 13 19:22:23.216803 kernel: acpiphp: Slot [7] registered
Feb 13 19:22:23.216823 kernel: acpiphp: Slot [8] registered
Feb 13 19:22:23.216842 kernel: acpiphp: Slot [9] registered
Feb 13 19:22:23.216863 kernel: acpiphp: Slot [10] registered
Feb 13 19:22:23.216882 kernel: acpiphp: Slot [11] registered
Feb 13 19:22:23.216897 kernel: acpiphp: Slot [12] registered
Feb 13 19:22:23.216916 kernel: acpiphp: Slot [13] registered
Feb 13 19:22:23.216931 kernel: acpiphp: Slot [14] registered
Feb 13 19:22:23.223014 kernel: acpiphp: Slot [15] registered
Feb 13 19:22:23.223047 kernel: acpiphp: Slot [16] registered
Feb 13 19:22:23.223063 kernel: acpiphp: Slot [17] registered
Feb 13 19:22:23.223079 kernel: acpiphp: Slot [18] registered
Feb 13 19:22:23.223094 kernel: acpiphp: Slot [19] registered
Feb 13 19:22:23.223109 kernel: acpiphp: Slot [20] registered
Feb 13 19:22:23.223125 kernel: acpiphp: Slot [21] registered
Feb 13 19:22:23.223140 kernel: acpiphp: Slot [22] registered
Feb 13 19:22:23.223161 kernel: acpiphp: Slot [23] registered
Feb 13 19:22:23.223176 kernel: acpiphp: Slot [24] registered
Feb 13 19:22:23.223191 kernel: acpiphp: Slot [25] registered
Feb 13 19:22:23.223205 kernel: acpiphp: Slot [26] registered
Feb 13 19:22:23.223220 kernel: acpiphp: Slot [27] registered
Feb 13 19:22:23.223236 kernel: acpiphp: Slot [28] registered
Feb 13 19:22:23.223251 kernel: acpiphp: Slot [29] registered
Feb 13 19:22:23.223266 kernel: acpiphp: Slot [30] registered
Feb 13 19:22:23.223281 kernel: acpiphp: Slot [31] registered
Feb 13 19:22:23.223299 kernel: PCI host bridge to bus 0000:00
Feb 13 19:22:23.225167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:22:23.225336 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:22:23.225460 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:22:23.225579 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:22:23.225701 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:22:23.225854 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:22:23.228613 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:22:23.228811 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:22:23.228982 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:22:23.229250 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:22:23.229391 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:22:23.229527 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:22:23.229655 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:22:23.229793 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:22:23.229919 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:22:23.232160 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:22:23.232322 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:22:23.232515 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:22:23.232703 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:22:23.232831 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:22:23.235041 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:22:23.235213 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:22:23.235351 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:22:23.235479 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:22:23.235498 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:22:23.235513 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:22:23.235550 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:22:23.235565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:22:23.235579 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:22:23.235594 kernel: iommu: Default domain type: Translated
Feb 13 19:22:23.235608 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:22:23.235623 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:22:23.235637 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:22:23.235650 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:22:23.235805 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:22:23.237877 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:22:23.240124 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:22:23.240160 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:22:23.240177 kernel: vgaarb: loaded
Feb 13 19:22:23.240194 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:22:23.240211 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:22:23.240211 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:22:23.240226 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:22:23.240243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:22:23.240265 kernel: pnp: PnP ACPI init
Feb 13 19:22:23.240281 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:22:23.240297 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:22:23.240313 kernel: NET: Registered PF_INET protocol family
Feb 13 19:22:23.240329 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:22:23.240345 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:22:23.240362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:22:23.240377 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:22:23.240394 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:22:23.240413 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:22:23.240429 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:22:23.240445 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:22:23.240460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:22:23.240476 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:22:23.240609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:22:23.240732 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:22:23.240849 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:22:23.240990 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:22:23.241133 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:22:23.241155 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:22:23.241172 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:22:23.241189 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:22:23.241205 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:22:23.241220 kernel: Initialise system trusted keyrings
Feb 13 19:22:23.241236 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:22:23.241256 kernel: Key type asymmetric registered
Feb 13 19:22:23.241271 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:22:23.241287 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:22:23.241302 kernel: io scheduler mq-deadline registered
Feb 13 19:22:23.241318 kernel: io scheduler kyber registered
Feb 13 19:22:23.241334 kernel: io scheduler bfq registered
Feb 13 19:22:23.241351 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:22:23.241366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:22:23.241382 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:22:23.241401 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:22:23.241417 kernel: i8042: Warning: Keylock active
Feb 13 19:22:23.241432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:22:23.241448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:22:23.241586 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:22:23.241771 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:22:23.241897 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:22:22 UTC (1739474542)
Feb 13 19:22:23.244088 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:22:23.244155 kernel: intel_pstate: CPU model not supported
Feb 13 19:22:23.244190 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:22:23.244222 kernel: Segment Routing with IPv6
Feb 13 19:22:23.244253 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:22:23.244284 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:22:23.244315 kernel: Key type dns_resolver registered
Feb 13 19:22:23.244346 kernel: IPI shorthand broadcast: enabled
Feb 13 19:22:23.244378 kernel: sched_clock: Marking stable (833204013, 244229220)->(1224143034, -146709801)
Feb 13 19:22:23.244409 kernel: registered taskstats version 1
Feb 13 19:22:23.244437 kernel: Loading compiled-in X.509 certificates
Feb 13 19:22:23.244451 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:22:23.244465 kernel: Key type .fscrypt registered
Feb 13 19:22:23.244479 kernel: Key type fscrypt-provisioning registered
Feb 13 19:22:23.244492 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:22:23.244507 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:22:23.244523 kernel: ima: No architecture policies found
Feb 13 19:22:23.244538 kernel: clk: Disabling unused clocks
Feb 13 19:22:23.244553 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:22:23.244572 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:22:23.244588 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:22:23.244603 kernel: Run /init as init process
Feb 13 19:22:23.244618 kernel: with arguments:
Feb 13 19:22:23.244633 kernel: /init
Feb 13 19:22:23.244645 kernel: with environment:
Feb 13 19:22:23.244657 kernel: HOME=/
Feb 13 19:22:23.244671 kernel: TERM=linux
Feb 13 19:22:23.244686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:22:23.244711 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:22:23.244741 systemd[1]: Detected virtualization amazon.
Feb 13 19:22:23.244759 systemd[1]: Detected architecture x86-64.
Feb 13 19:22:23.244772 systemd[1]: Running in initrd.
Feb 13 19:22:23.244787 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:22:23.244806 systemd[1]: Hostname set to <localhost>.
Feb 13 19:22:23.244821 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:22:23.244836 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:22:23.244850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:22:23.244866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:22:23.244882 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:22:23.244897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:22:23.244912 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:22:23.244930 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:22:23.244960 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:22:23.244976 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:22:23.244991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:22:23.245006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:22:23.245021 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:22:23.245035 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:22:23.245054 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:22:23.245068 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:22:23.245083 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:22:23.245098 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:22:23.245113 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:22:23.245128 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:22:23.245143 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:22:23.245158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:22:23.245175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:22:23.245190 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:22:23.245204 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:22:23.245219 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:22:23.245237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:22:23.245254 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:22:23.245268 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:22:23.245286 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:22:23.245304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:22:23.245320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:23.245335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:22:23.245350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:22:23.245398 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:22:23.245437 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:22:23.245453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:22:23.245468 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:22:23.245487 systemd-journald[179]: Journal started
Feb 13 19:22:23.245519 systemd-journald[179]: Runtime Journal (/run/log/journal/ec29f55d72ecbcf0b200c63d3aebd0ee) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:22:23.252993 kernel: Bridge firewalling registered
Feb 13 19:22:23.172347 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:22:23.427709 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:22:23.250274 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:22:23.422131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:22:23.422689 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:22:23.431184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:22:23.453161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:22:23.456285 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:22:23.471258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:23.478343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:22:23.505765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:22:23.518171 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:23.538635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:22:23.551165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:22:23.567936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:23.582223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:22:23.597748 dracut-cmdline[215]: dracut-dracut-053
Feb 13 19:22:23.604696 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:22:23.644295 systemd-resolved[205]: Positive Trust Anchors:
Feb 13 19:22:23.644313 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:22:23.644377 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:22:23.651176 systemd-resolved[205]: Defaulting to hostname 'linux'.
Feb 13 19:22:23.653656 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:22:23.658597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:22:23.776977 kernel: SCSI subsystem initialized
Feb 13 19:22:23.789972 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:22:23.806071 kernel: iscsi: registered transport (tcp)
Feb 13 19:22:23.852099 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:22:23.852188 kernel: QLogic iSCSI HBA Driver
Feb 13 19:22:23.910255 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:22:23.919681 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:22:23.982204 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:22:23.982282 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:22:23.989413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:22:24.050040 kernel: raid6: avx512x4 gen() 9580 MB/s
Feb 13 19:22:24.068727 kernel: raid6: avx512x2 gen() 10683 MB/s
Feb 13 19:22:24.093390 kernel: raid6: avx512x1 gen() 4831 MB/s
Feb 13 19:22:24.110003 kernel: raid6: avx2x4 gen() 4877 MB/s
Feb 13 19:22:24.138282 kernel: raid6: avx2x2 gen() 5830 MB/s
Feb 13 19:22:24.163483 kernel: raid6: avx2x1 gen() 3121 MB/s
Feb 13 19:22:24.163599 kernel: raid6: using algorithm avx512x2 gen() 10683 MB/s
Feb 13 19:22:24.187886 kernel: raid6: .... xor() 1992 MB/s, rmw enabled
Feb 13 19:22:24.187991 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:22:24.326982 kernel: xor: automatically using best checksumming function avx
Feb 13 19:22:24.750977 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:22:24.768326 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:22:24.779150 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:22:24.815358 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 19:22:24.823311 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:22:24.837048 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:22:24.864072 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Feb 13 19:22:24.924569 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:22:24.935173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:22:25.054330 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:22:25.065294 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:22:25.112088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:22:25.117072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:22:25.118745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:22:25.122365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:22:25.140364 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:22:25.164213 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:22:25.230834 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:22:25.231153 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:22:25.231345 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:eb:50:39:23:a7
Feb 13 19:22:25.231511 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:22:25.171707 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:22:25.247274 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:22:25.247340 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:22:25.264756 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:22:25.294390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:22:25.294557 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:25.298171 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:25.299907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:22:25.300204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:25.302447 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:25.329270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:25.336173 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:22:25.336447 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:22:25.349971 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:22:25.359305 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:22:25.359387 kernel: GPT:9289727 != 16777215
Feb 13 19:22:25.359406 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:22:25.361224 kernel: GPT:9289727 != 16777215
Feb 13 19:22:25.361281 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:22:25.361299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:22:25.511989 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Feb 13 19:22:25.524031 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Feb 13 19:22:25.529302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:25.541814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:25.614676 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:22:25.617999 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:25.651654 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:22:25.675302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:22:25.717231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:22:25.717471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:22:25.742284 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:22:25.778468 disk-uuid[631]: Primary Header is updated.
Feb 13 19:22:25.778468 disk-uuid[631]: Secondary Entries is updated.
Feb 13 19:22:25.778468 disk-uuid[631]: Secondary Header is updated.
Feb 13 19:22:25.795080 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:22:25.814032 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:22:26.814896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:22:26.815735 disk-uuid[632]: The operation has completed successfully.
Feb 13 19:22:27.021190 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:22:27.021322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:22:27.059225 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:22:27.075601 sh[892]: Success
Feb 13 19:22:27.108975 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:22:27.282883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:22:27.310700 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:22:27.343082 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:22:27.382317 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:22:27.382387 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:22:27.382407 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:22:27.383559 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:22:27.384331 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:22:27.516005 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:22:27.540204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:22:27.543074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:22:27.554589 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:22:27.561884 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:22:27.595549 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:22:27.595620 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:22:27.595643 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:22:27.608019 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:22:27.624066 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:22:27.627350 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:22:27.634442 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:22:27.644327 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:22:27.703123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:22:27.712978 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:22:27.814682 systemd-networkd[1084]: lo: Link UP
Feb 13 19:22:27.814696 systemd-networkd[1084]: lo: Gained carrier
Feb 13 19:22:27.819749 systemd-networkd[1084]: Enumeration completed
Feb 13 19:22:27.820438 systemd-networkd[1084]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:27.820444 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:22:27.820794 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:22:27.827593 systemd[1]: Reached target network.target - Network.
Feb 13 19:22:27.836442 systemd-networkd[1084]: eth0: Link UP
Feb 13 19:22:27.836453 systemd-networkd[1084]: eth0: Gained carrier
Feb 13 19:22:27.836470 systemd-networkd[1084]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:27.852334 systemd-networkd[1084]: eth0: DHCPv4 address 172.31.18.187/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:22:28.121359 ignition[1015]: Ignition 2.20.0
Feb 13 19:22:28.121375 ignition[1015]: Stage: fetch-offline
Feb 13 19:22:28.121614 ignition[1015]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:28.121628 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:22:28.122599 ignition[1015]: Ignition finished successfully
Feb 13 19:22:28.130333 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:22:28.138199 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:22:28.165432 ignition[1094]: Ignition 2.20.0
Feb 13 19:22:28.165446 ignition[1094]: Stage: fetch
Feb 13 19:22:28.165970 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:28.165994 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:22:28.166134 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:22:28.207343 ignition[1094]: PUT result: OK
Feb 13 19:22:28.215020 ignition[1094]: parsed url from cmdline: ""
Feb 13 19:22:28.217110 ignition[1094]: no config URL provided
Feb 13 19:22:28.217124 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:22:28.217148 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:22:28.217192 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:22:28.226369 ignition[1094]: PUT result: OK
Feb 13 19:22:28.226477 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:22:28.237845 ignition[1094]: GET result: OK
Feb 13 19:22:28.240935 ignition[1094]: parsing config with SHA512: 86de424a84140d4107a616c1c7313cdd7ec41a1a7e2e87d4d65ff5620a659d9aad39ebfaf38a0be19b0b2efad1f18b10f1214dd6c4c79910cc5b3f4def573b5b
Feb 13 19:22:28.264189 unknown[1094]: fetched base config from "system"
Feb 13 19:22:28.264203 unknown[1094]: fetched base config from "system"
Feb 13 19:22:28.264824 ignition[1094]: fetch: fetch complete
Feb 13 19:22:28.264210 unknown[1094]: fetched user config from "aws"
Feb 13 19:22:28.264831 ignition[1094]: fetch: fetch passed
Feb 13 19:22:28.276336 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:22:28.264893 ignition[1094]: Ignition finished successfully
Feb 13 19:22:28.288310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:22:28.327645 ignition[1100]: Ignition 2.20.0
Feb 13 19:22:28.327660 ignition[1100]: Stage: kargs
Feb 13 19:22:28.328162 ignition[1100]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:28.328176 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:22:28.328296 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:22:28.330079 ignition[1100]: PUT result: OK
Feb 13 19:22:28.337265 ignition[1100]: kargs: kargs passed
Feb 13 19:22:28.337505 ignition[1100]: Ignition finished successfully
Feb 13 19:22:28.342193 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:22:28.348216 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:22:28.393728 ignition[1106]: Ignition 2.20.0
Feb 13 19:22:28.393743 ignition[1106]: Stage: disks
Feb 13 19:22:28.394198 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:28.394212 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:22:28.394333 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:22:28.395867 ignition[1106]: PUT result: OK
Feb 13 19:22:28.406571 ignition[1106]: disks: disks passed
Feb 13 19:22:28.406657 ignition[1106]: Ignition finished successfully
Feb 13 19:22:28.408935 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:22:28.409835 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:22:28.413718 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:22:28.415253 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:22:28.421592 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:22:28.424123 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:22:28.437178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:22:28.495755 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:22:28.499272 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:22:28.510107 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:22:28.638969 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:22:28.639296 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:22:28.640084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:22:28.655188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:22:28.667237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:22:28.669564 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:22:28.669616 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:22:28.669644 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:22:28.677879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:22:28.682163 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:22:28.698696 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1133)
Feb 13 19:22:28.698756 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:22:28.700457 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:22:28.700501 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:22:28.716452 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:22:28.717517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:22:29.114758 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:22:29.166056 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:22:29.175285 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:22:29.214471 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:22:29.618005 systemd-networkd[1084]: eth0: Gained IPv6LL
Feb 13 19:22:29.643226 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:22:29.658203 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:22:29.666475 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:22:29.712987 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:22:29.712935 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:22:29.761876 ignition[1246]: INFO : Ignition 2.20.0
Feb 13 19:22:29.761876 ignition[1246]: INFO : Stage: mount
Feb 13 19:22:29.773723 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:29.773723 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:22:29.773723 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:22:29.762346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:22:29.786747 ignition[1246]: INFO : PUT result: OK
Feb 13 19:22:29.792937 ignition[1246]: INFO : mount: mount passed
Feb 13 19:22:29.794334 ignition[1246]: INFO : Ignition finished successfully
Feb 13 19:22:29.805999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:22:29.821120 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:22:29.879226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:22:29.932019 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257)
Feb 13 19:22:29.935532 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:22:29.935604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:22:29.935622 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:22:29.942986 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:22:29.946567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:22:30.019587 ignition[1273]: INFO : Ignition 2.20.0 Feb 13 19:22:30.019587 ignition[1273]: INFO : Stage: files Feb 13 19:22:30.033581 ignition[1273]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:22:30.033581 ignition[1273]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:22:30.033581 ignition[1273]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:22:30.040349 ignition[1273]: INFO : PUT result: OK Feb 13 19:22:30.045267 ignition[1273]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:22:30.061846 ignition[1273]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:22:30.061846 ignition[1273]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:22:30.094889 ignition[1273]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:22:30.097426 ignition[1273]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:22:30.097426 ignition[1273]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:22:30.096331 unknown[1273]: wrote ssh authorized keys file for user: core Feb 13 19:22:30.111519 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:22:30.126355 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:22:30.245286 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:22:30.438047 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:22:30.441739 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:22:30.441739 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 19:22:30.968554 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:22:31.207371 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:22:31.209981 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:22:31.209981 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:22:31.209981 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:22:31.209981 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:22:31.209981 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:22:31.228784 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:22:31.228784 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 
13 19:22:31.228784 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:22:31.228784 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:22:31.228784 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:22:31.261341 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:22:31.261341 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:22:31.261341 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:22:31.261341 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:22:31.690896 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:22:32.656535 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:22:32.656535 ignition[1273]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:22:32.667917 ignition[1273]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:22:32.673961 ignition[1273]: INFO : files: files passed Feb 13 19:22:32.673961 ignition[1273]: INFO : Ignition finished successfully Feb 13 19:22:32.678159 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:22:32.716360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:22:32.747578 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:22:32.751824 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:22:32.751917 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
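Everything the files stage does above (creating user "core", installing ssh keys, fetching helm and cilium, writing prepare-helm.service and presetting it to enabled) is declared in the instance's Ignition config rather than scripted. A rough sketch of a config that would produce those operations, written as a Python dict in the JSON shape of the Ignition v3 spec (paths and URLs are taken from the log; the key layout follows the public spec but is abbreviated here, and the ssh key and unit body are placeholders):

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            # ensureUsers op(1)/op(2): modify "core" and add ssh keys
            "users": [{"name": "core",
                       "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]}]
        },
        "storage": {
            "files": [
                # createFiles op(3): the GET from get.helm.sh above
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            ]
        },
        "systemd": {
            "units": [
                # op(c)/op(e): write the unit, then preset it to enabled
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            ]
        },
    }
    print(json.dumps(config, indent=2))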
Feb 13 19:22:32.834232 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:22:32.834232 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:22:32.851110 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:22:32.865058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:22:32.872556 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:22:32.890250 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:22:32.975152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:22:32.975302 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:22:32.984418 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:22:32.987495 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:22:32.990091 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:22:33.001178 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:22:33.028410 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:22:33.039162 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:22:33.077791 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:22:33.086419 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:22:33.090094 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:22:33.102548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:22:33.105052 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:22:33.117398 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:22:33.119120 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:22:33.124826 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:22:33.128064 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:22:33.134782 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:22:33.138361 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:22:33.141643 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:22:33.152168 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:22:33.158260 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:22:33.160287 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:22:33.169402 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:22:33.171044 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:22:33.187424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:22:33.197536 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:22:33.199916 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 19:22:33.201974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:22:33.211374 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:22:33.214140 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:22:33.227168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:22:33.227402 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:22:33.232799 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:22:33.232989 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:22:33.266327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:22:33.271915 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:22:33.272176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:22:33.338843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:22:33.345363 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:22:33.346280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:22:33.366536 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:22:33.372709 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:22:33.421370 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:22:33.421515 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:22:33.426843 ignition[1328]: INFO : Ignition 2.20.0 Feb 13 19:22:33.426843 ignition[1328]: INFO : Stage: umount Feb 13 19:22:33.426843 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:22:33.426843 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:22:33.426843 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:22:33.490063 ignition[1328]: INFO : PUT result: OK Feb 13 19:22:33.500205 ignition[1328]: INFO : umount: umount passed Feb 13 19:22:33.511317 ignition[1328]: INFO : Ignition finished successfully Feb 13 19:22:33.517127 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:22:33.518050 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:22:33.518189 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:22:33.520663 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:22:33.520787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:22:33.527828 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:22:33.527920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:22:33.533847 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:22:33.545817 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:22:33.546365 systemd[1]: Stopped target network.target - Network. Feb 13 19:22:33.546584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:22:33.546663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:22:33.546902 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:22:33.547278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 19:22:33.562982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:22:33.563155 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:22:33.574183 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:22:33.578523 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:22:33.578587 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:22:33.584914 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:22:33.584986 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:22:33.590206 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:22:33.590328 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:22:33.598727 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:22:33.598815 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:22:33.605167 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:22:33.608701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:22:33.612280 systemd-networkd[1084]: eth0: DHCPv6 lease lost Feb 13 19:22:33.634654 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:22:33.635706 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:22:33.638153 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:22:33.638262 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:22:33.644673 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:22:33.644819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:22:33.650240 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:22:33.650302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:22:33.663030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:22:33.663270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:22:33.686657 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:22:33.686820 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:22:33.686905 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:22:33.694604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:22:33.694689 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:33.699533 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:22:33.699669 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:22:33.704089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:22:33.704169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:22:33.709967 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:22:33.727126 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:22:33.727317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:22:33.733011 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:22:33.733506 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:22:33.740166 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:22:33.740230 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:22:33.746461 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:22:33.746548 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:22:33.765541 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:22:33.765648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:22:33.771329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:22:33.771421 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:22:33.790178 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:22:33.791843 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:22:33.791992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:22:33.798173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:22:33.798268 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:22:33.807603 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:22:33.807721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:22:33.842089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:22:33.847377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:22:33.854337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:22:33.868526 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:22:33.925910 systemd[1]: Switching root. Feb 13 19:22:33.963189 systemd-journald[179]: Journal stopped Feb 13 19:22:37.778891 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Feb 13 19:22:37.780058 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:22:37.780086 kernel: SELinux: policy capability open_perms=1 Feb 13 19:22:37.780190 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:22:37.780516 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:22:37.780562 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:22:37.780581 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:22:37.780598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:22:37.780618 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:22:37.780635 kernel: audit: type=1403 audit(1739474554.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:22:37.780661 systemd[1]: Successfully loaded SELinux policy in 79.499ms. Feb 13 19:22:37.780693 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.997ms. Feb 13 19:22:37.780714 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:22:37.780734 systemd[1]: Detected virtualization amazon. Feb 13 19:22:37.780757 systemd[1]: Detected architecture x86-64. Feb 13 19:22:37.780776 systemd[1]: Detected first boot. 
Feb 13 19:22:37.780796 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:22:37.780814 zram_generator::config[1371]: No configuration found. Feb 13 19:22:37.780835 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:22:37.780854 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:22:37.780872 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:22:37.780892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:22:37.780916 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:22:37.780935 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:22:37.781094 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:22:37.781118 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:22:37.781137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:22:37.781156 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:22:37.781175 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:22:37.781194 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:22:37.781218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:22:37.781238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:22:37.781256 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:22:37.781275 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:22:37.781294 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:22:37.781314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:22:37.781332 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:22:37.781774 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:22:37.781798 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:22:37.781820 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:22:37.781839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:22:37.781866 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:22:37.781884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:22:37.781903 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:22:37.781921 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:22:37.781942 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:22:37.781978 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:22:37.782011 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:22:37.782033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:22:37.782055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:22:37.782076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
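The escaped names in the slice and device units above (system-addon\x2dconfig.slice, dev-disk-by\x2dlabel-OEM.device) come from systemd's unit-name escaping: '/' in a path becomes '-', so a literal '-' has to be hex-escaped as \x2d. A rough Python rendering of the path rules from systemd.unit(5) follows; the real systemd-escape tool handles more edge cases, this sketch only covers what the log shows:

    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/") or "-"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                    # path separator -> dash
            elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)                     # allowed as-is
            else:
                out.append("\\x%02x" % ord(ch))    # e.g. '-' -> \x2d
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
    # -> dev-disk-by\x2dlabel-OEM.device, matching the unit name above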
Feb 13 19:22:37.782098 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:22:37.782119 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:22:37.782141 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:22:37.782163 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:22:37.782184 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:22:37.782208 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:22:37.782230 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:22:37.782251 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:22:37.782867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:22:37.782898 systemd[1]: Reached target machines.target - Containers. Feb 13 19:22:37.782917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:22:37.782937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:22:37.783051 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:22:37.783073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:22:37.783098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:22:37.783116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:22:37.783135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:22:37.783154 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:22:37.783172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:22:37.783192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:22:37.783211 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:22:37.783230 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:22:37.783253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:22:37.783271 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:22:37.783290 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:22:37.783308 kernel: loop: module loaded Feb 13 19:22:37.783328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:22:37.783346 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:22:37.783365 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:22:37.783385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:22:37.783404 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:22:37.783426 systemd[1]: Stopped verity-setup.service. Feb 13 19:22:37.783446 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 19:22:37.783465 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:22:37.783484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:22:37.783501 kernel: fuse: init (API version 7.39) Feb 13 19:22:37.783520 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:22:37.783538 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:22:37.783556 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:22:37.783577 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:22:37.783596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:22:37.783614 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:22:37.783632 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:22:37.783650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:22:37.783671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:22:37.783690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:22:37.783708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:22:37.783729 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:22:37.783747 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:22:37.783765 kernel: ACPI: bus type drm_connector registered Feb 13 19:22:37.783784 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:22:37.783804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:22:37.783826 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:22:37.783845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:22:37.783898 systemd-journald[1446]: Collecting audit messages is disabled. Feb 13 19:22:37.784942 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:22:37.785002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:22:37.785025 systemd-journald[1446]: Journal started Feb 13 19:22:37.785067 systemd-journald[1446]: Runtime Journal (/run/log/journal/ec29f55d72ecbcf0b200c63d3aebd0ee) is 4.8M, max 38.5M, 33.7M free. Feb 13 19:22:37.108683 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:22:37.181100 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:22:37.181680 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:22:37.790151 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:22:37.793343 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:22:37.818014 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:22:37.826079 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:22:37.841098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:22:37.845118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:22:37.845177 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:22:37.851525 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Feb 13 19:22:37.860168 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:22:37.869478 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:22:37.872549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:22:37.881238 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:22:37.887538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:22:37.889300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:22:37.896387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:22:37.900096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:22:37.909320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:22:37.914223 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:22:37.918755 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:22:37.921107 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:22:37.923589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:22:37.925933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:22:37.946199 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:22:37.961255 systemd-journald[1446]: Time spent on flushing to /var/log/journal/ec29f55d72ecbcf0b200c63d3aebd0ee is 97.060ms for 962 entries. Feb 13 19:22:37.961255 systemd-journald[1446]: System Journal (/var/log/journal/ec29f55d72ecbcf0b200c63d3aebd0ee) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:22:38.076908 systemd-journald[1446]: Received client request to flush runtime journal. Feb 13 19:22:38.077013 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 19:22:37.961497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:22:37.968216 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:22:37.972270 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:22:38.001085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:22:38.046591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:22:38.059199 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:22:38.079740 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:22:38.098704 udevadm[1510]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:22:38.106071 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:22:38.108569 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:22:38.122552 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:22:38.135871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 19:22:38.201980 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:22:38.203750 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Feb 13 19:22:38.205071 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Feb 13 19:22:38.221447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:22:38.246324 kernel: loop1: detected capacity change from 0 to 62848 Feb 13 19:22:38.396975 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 19:22:38.562085 kernel: loop3: detected capacity change from 0 to 218376 Feb 13 19:22:38.834073 kernel: loop4: detected capacity change from 0 to 141000 Feb 13 19:22:38.910444 kernel: loop5: detected capacity change from 0 to 62848 Feb 13 19:22:38.965985 kernel: loop6: detected capacity change from 0 to 138184 Feb 13 19:22:39.031126 kernel: loop7: detected capacity change from 0 to 218376 Feb 13 19:22:39.131193 (sd-merge)[1524]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:22:39.137258 (sd-merge)[1524]: Merged extensions into '/usr'. Feb 13 19:22:39.177578 systemd[1]: Reloading requested from client PID 1497 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:22:39.177599 systemd[1]: Reloading... Feb 13 19:22:39.320979 zram_generator::config[1549]: No configuration found. Feb 13 19:22:39.586248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:22:39.719888 systemd[1]: Reloading finished in 539 ms. Feb 13 19:22:39.750376 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:22:39.752397 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:22:39.811880 systemd[1]: Starting ensure-sysext.service... Feb 13 19:22:39.853261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:22:39.870184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:22:39.885455 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:22:39.885479 systemd[1]: Reloading... Feb 13 19:22:39.891719 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:22:39.892168 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:22:39.896835 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:22:39.898110 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Feb 13 19:22:39.898199 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Feb 13 19:22:39.956767 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:22:39.962425 systemd-tmpfiles[1600]: Skipping /boot Feb 13 19:22:39.962885 systemd-udevd[1602]: Using default interface naming scheme 'v255'. Feb 13 19:22:40.002698 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:22:40.002717 systemd-tmpfiles[1600]: Skipping /boot Feb 13 19:22:40.135013 zram_generator::config[1629]: No configuration found. 
Feb 13 19:22:40.236113 (udev-worker)[1631]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:22:40.384861 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:22:40.397974 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:22:40.404995 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 19:22:40.437349 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:22:40.437499 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Feb 13 19:22:40.442979 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:22:40.448976 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 19:22:40.474599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:22:40.544337 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:22:40.567041 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1641) Feb 13 19:22:40.598165 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:22:40.598359 systemd[1]: Reloading finished in 712 ms. Feb 13 19:22:40.619434 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:22:40.621388 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:22:40.633669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:22:40.686617 systemd[1]: Finished ensure-sysext.service. Feb 13 19:22:40.695516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:22:40.705434 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:22:40.715353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:22:40.717027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:22:40.728881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:22:40.733294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:22:40.737023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:22:40.742110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:22:40.743517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:22:40.756034 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:22:40.768825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:22:40.788318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:22:40.790129 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:22:40.802166 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:22:40.810289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
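The (sd-merge) lines above are systemd-sysext attaching the extension images (the loop0..loop7 devices) and overlaying their /usr trees onto the running system; this is how the kubernetes-v1.32.0 image written by Ignition earlier becomes live /usr content. A hedged sketch of the minimal on-disk shape such an extension needs, with illustrative names (the merge only succeeds when the extension-release fields are compatible with the host's os-release):

    import pathlib

    root = pathlib.Path("example-sysext")            # hypothetical staging dir
    (root / "usr/bin").mkdir(parents=True, exist_ok=True)
    rel = root / "usr/lib/extension-release.d"
    rel.mkdir(parents=True, exist_ok=True)
    # The file name must match the image name ("extension-release.<name>"),
    # and ID (plus SYSEXT_LEVEL or VERSION_ID, depending on which is set)
    # must be acceptable to the host.
    (rel / "extension-release.example-sysext").write_text(
        "ID=flatcar\nSYSEXT_LEVEL=1.0\n"
    )
    # Payload goes under usr/ (e.g. usr/bin/kubelet); packing this tree
    # into a squashfs/erofs image yields a .raw like the ones merged above.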
Feb 13 19:22:40.812099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:22:40.813070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:22:40.815018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:22:40.815500 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:22:40.815661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:22:40.844898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:22:40.858280 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:22:40.882460 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:22:40.882768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:22:40.899418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:22:40.910444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:22:40.910705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:22:40.938353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:22:40.974836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:22:41.010826 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:22:41.024605 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:22:41.033348 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:22:41.046809 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:22:41.057225 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:22:41.096162 lvm[1826]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:22:41.110054 augenrules[1833]: No rules Feb 13 19:22:41.105192 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:22:41.110578 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:22:41.110902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:22:41.120681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:22:41.124415 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:22:41.128017 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:22:41.130271 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:22:41.142449 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:22:41.142767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:22:41.152298 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:22:41.189146 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:22:41.253435 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:22:41.306211 systemd-networkd[1798]: lo: Link UP Feb 13 19:22:41.306589 systemd-networkd[1798]: lo: Gained carrier Feb 13 19:22:41.310359 systemd-networkd[1798]: Enumeration completed Feb 13 19:22:41.310798 systemd-networkd[1798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:22:41.310803 systemd-networkd[1798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:22:41.313980 systemd-networkd[1798]: eth0: Link UP Feb 13 19:22:41.314287 systemd-networkd[1798]: eth0: Gained carrier Feb 13 19:22:41.317146 systemd-networkd[1798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:22:41.329046 systemd-networkd[1798]: eth0: DHCPv4 address 172.31.18.187/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:22:41.333067 systemd-resolved[1801]: Positive Trust Anchors: Feb 13 19:22:41.333432 systemd-resolved[1801]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:22:41.333544 systemd-resolved[1801]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:22:41.352289 systemd-resolved[1801]: Defaulting to hostname 'linux'. Feb 13 19:22:41.370976 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:22:41.372693 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:22:41.374587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:22:41.376992 systemd[1]: Reached target network.target - Network. Feb 13 19:22:41.379489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:22:41.382007 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:22:41.384724 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:22:41.391180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:22:41.396093 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:22:41.398203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:22:41.400978 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:22:41.404359 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:22:41.404399 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:22:41.405964 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:22:41.416687 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
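For the DHCP lines above: eth0 matched the vendor-supplied /usr/lib/systemd/network/zz-default.network and picked up 172.31.18.187/20 over DHCPv4. The exact contents of Flatcar's zz-default.network are not shown in the log, but a .network unit of the same general shape looks like the sketch below (the file name and match pattern are illustrative):

    import pathlib, textwrap

    network_unit = textwrap.dedent("""\
        [Match]
        Name=eth*

        [Network]
        DHCP=yes
        """)
    # Admin overrides live in /etc/systemd/network/; vendor defaults such
    # as zz-default.network ship under /usr/lib/systemd/network/.
    pathlib.Path("90-dhcp-example.network").write_text(network_unit)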
Feb 13 19:22:41.420147 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:22:41.431843 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:22:41.439436 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:22:41.443498 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:22:41.447577 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:22:41.450983 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:22:41.460799 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:22:41.461259 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:22:41.480094 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:22:41.485909 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:22:41.499302 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:22:41.508996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:22:41.536635 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:22:41.538340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:22:41.551736 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:22:41.564757 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:22:41.598108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:22:41.644815 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:22:41.666811 jq[1861]: false Feb 13 19:22:41.686539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:22:41.709155 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:22:41.735239 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:22:41.736963 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:22:41.740083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:22:41.749506 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:22:41.752133 dbus-daemon[1860]: [system] SELinux support is enabled Feb 13 19:22:41.754262 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:22:41.757446 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:22:41.776510 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:22:41.776859 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:22:41.781580 dbus-daemon[1860]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1798 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:22:41.787128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
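The dbus-daemon line above ("Activating systemd to hand-off: service name='org.freedesktop.hostname1' ...") shows bus activation: the first client to talk to a well-known name makes dbus-daemon ask systemd to start the backing service on demand (here it is systemd-networkd asking for hostname1). A small sketch of such a trigger using busctl, which ships with systemd (the printed hostname is only a guess for this instance):

    import subprocess

    # get-property takes service, object path, interface, property name;
    # on a quiet system this very call is what kicks off the activation
    # hand-off logged above for hostname1.
    out = subprocess.run(
        ["busctl", "get-property",
         "org.freedesktop.hostname1", "/org/freedesktop/hostname1",
         "org.freedesktop.hostname1", "Hostname"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # e.g.: s "ip-172-31-18-187"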
Feb 13 19:22:41.787584 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:22:41.829669 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:22:41.830055 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:22:41.870184 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:22:41.876444 update_engine[1872]: I20250213 19:22:41.873462 1872 main.cc:92] Flatcar Update Engine starting Feb 13 19:22:41.876272 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:22:41.876311 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:22:41.878355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:22:41.886431 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:22:41.878397 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:22:41.927326 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:22:41.943129 update_engine[1872]: I20250213 19:22:41.933864 1872 update_check_scheduler.cc:74] Next update check in 10m1s Feb 13 19:22:41.933166 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:22:41.933324 (ntainerd)[1889]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:22:41.955609 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:22:42.021273 jq[1873]: true Feb 13 19:22:42.046070 extend-filesystems[1862]: Found loop4 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found loop5 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found loop6 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found loop7 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found nvme0n1 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found nvme0n1p1 Feb 13 19:22:42.046070 extend-filesystems[1862]: Found nvme0n1p2 Feb 13 19:22:42.098753 extend-filesystems[1862]: Found nvme0n1p3 Feb 13 19:22:42.098753 extend-filesystems[1862]: Found usr Feb 13 19:22:42.098753 extend-filesystems[1862]: Found nvme0n1p4 Feb 13 19:22:42.098753 extend-filesystems[1862]: Found nvme0n1p6 Feb 13 19:22:42.098753 extend-filesystems[1862]: Found nvme0n1p7 Feb 13 19:22:42.098753 extend-filesystems[1862]: Found nvme0n1p9 Feb 13 19:22:42.098753 extend-filesystems[1862]: Checking size of /dev/nvme0n1p9 Feb 13 19:22:42.168043 tar[1880]: linux-amd64/LICENSE Feb 13 19:22:42.168043 tar[1880]: linux-amd64/helm Feb 13 19:22:42.128424 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:06:12 UTC 2025 (1): Starting Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:06:12 UTC 2025 (1): Starting Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: ---------------------------------------------------- Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: corporation. Support and training for ntp-4 are Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: available at https://www.nwtime.org/support Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: ---------------------------------------------------- Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: proto: precision = 0.080 usec (-23) Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: basedate set to 2025-02-01 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: gps base set to 2025-02-02 (week 2352) Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listen normally on 3 eth0 172.31.18.187:123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listen normally on 4 lo [::1]:123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: bind(21) AF_INET6 fe80::4eb:50ff:fe39:23a7%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: unable to create socket on eth0 (5) for fe80::4eb:50ff:fe39:23a7%2#123 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: failed to init interface for address fe80::4eb:50ff:fe39:23a7%2 Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: Listening on routing socket on fd #21 for interface updates Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:22:42.259570 ntpd[1864]: 13 Feb 19:22:42 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:22:42.275161 extend-filesystems[1862]: Resized partition /dev/nvme0n1p9 Feb 13 19:22:42.128461 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:22:42.283359 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:22:42.286605 extend-filesystems[1912]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:22:42.301374 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:22:42.128474 ntpd[1864]: ---------------------------------------------------- Feb 13 19:22:42.301768 jq[1902]: true Feb 13 19:22:42.128485 ntpd[1864]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:22:42.128496 ntpd[1864]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:22:42.128505 ntpd[1864]: corporation. 
Support and training for ntp-4 are Feb 13 19:22:42.129251 ntpd[1864]: available at https://www.nwtime.org/support Feb 13 19:22:42.129269 ntpd[1864]: ---------------------------------------------------- Feb 13 19:22:42.143120 ntpd[1864]: proto: precision = 0.080 usec (-23) Feb 13 19:22:42.146130 ntpd[1864]: basedate set to 2025-02-01 Feb 13 19:22:42.146154 ntpd[1864]: gps base set to 2025-02-02 (week 2352) Feb 13 19:22:42.159754 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:22:42.159826 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:22:42.160075 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:22:42.160117 ntpd[1864]: Listen normally on 3 eth0 172.31.18.187:123 Feb 13 19:22:42.160351 ntpd[1864]: Listen normally on 4 lo [::1]:123 Feb 13 19:22:42.160422 ntpd[1864]: bind(21) AF_INET6 fe80::4eb:50ff:fe39:23a7%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:22:42.160444 ntpd[1864]: unable to create socket on eth0 (5) for fe80::4eb:50ff:fe39:23a7%2#123 Feb 13 19:22:42.160458 ntpd[1864]: failed to init interface for address fe80::4eb:50ff:fe39:23a7%2 Feb 13 19:22:42.160930 ntpd[1864]: Listening on routing socket on fd #21 for interface updates Feb 13 19:22:42.176586 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:22:42.176632 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:22:42.418014 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:22:42.418099 coreos-metadata[1859]: Feb 13 19:22:42.414 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:22:42.426912 coreos-metadata[1859]: Feb 13 19:22:42.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:22:42.427745 coreos-metadata[1859]: Feb 13 19:22:42.427 INFO Fetch successful Feb 13 19:22:42.427745 coreos-metadata[1859]: Feb 13 19:22:42.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:22:42.435981 extend-filesystems[1912]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:22:42.435981 extend-filesystems[1912]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:22:42.435981 extend-filesystems[1912]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:22:42.435730 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.437 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.443 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.444 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.447 INFO Fetch failed with 404: resource not found Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.447 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.448 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.448 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.450 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.450 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.451 INFO Fetch successful Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.451 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:22:42.464620 coreos-metadata[1859]: Feb 13 19:22:42.452 INFO Fetch successful Feb 13 19:22:42.469281 extend-filesystems[1862]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:22:42.436864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:22:42.548259 systemd-logind[1871]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:22:42.548296 systemd-logind[1871]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 19:22:42.548324 systemd-logind[1871]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:22:42.549186 systemd-networkd[1798]: eth0: Gained IPv6LL Feb 13 19:22:42.568106 systemd-logind[1871]: New seat seat0. Feb 13 19:22:42.613383 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:22:42.630434 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:22:42.635088 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:22:42.699348 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:22:42.750443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:42.762260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:22:42.796324 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:22:42.798807 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
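The coreos-metadata fetches above follow the EC2 IMDSv2 flow: a PUT to the token endpoint, then GETs against the versioned meta-data paths with the token attached (the 404 on /ipv6 just means no IPv6 address is assigned to the instance). A minimal Python sketch of the same flow, using only the endpoint paths visible in the log; the token TTL is an illustrative assumption, not part of the boot record:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# IMDSv2: obtain a session token first (the "Putting .../api/token" line above).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # assumed TTL
)
token = urllib.request.urlopen(token_req).read().decode()

# Then fetch individual attributes, as coreos-metadata does for instance-id,
# instance-type, local-ipv4, and so on.
def fetch(path: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print(fetch("instance-id"))
```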
Feb 13 19:22:42.829084 bash[1945]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:22:42.832447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:22:42.851674 systemd[1]: Starting sshkeys.service... Feb 13 19:22:42.929692 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1646) Feb 13 19:22:42.988927 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:22:43.006943 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:22:43.020156 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:22:43.031760 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:22:43.036199 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:22:43.044253 dbus-daemon[1860]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1893 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:22:43.056424 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:22:43.101008 sshd_keygen[1907]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:22:43.113734 amazon-ssm-agent[1943]: Initializing new seelog logger Feb 13 19:22:43.113734 amazon-ssm-agent[1943]: New Seelog Logger Creation Complete Feb 13 19:22:43.113734 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.113734 amazon-ssm-agent[1943]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.113734 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 processing appconfig overrides Feb 13 19:22:43.114277 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.114277 amazon-ssm-agent[1943]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.114277 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 processing appconfig overrides Feb 13 19:22:43.114277 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.114277 amazon-ssm-agent[1943]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.114438 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 processing appconfig overrides Feb 13 19:22:43.116315 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO Proxy environment variables: Feb 13 19:22:43.119507 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:22:43.119507 amazon-ssm-agent[1943]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:22:43.119507 amazon-ssm-agent[1943]: 2025/02/13 19:22:43 processing appconfig overrides Feb 13 19:22:43.151049 polkitd[1990]: Started polkitd version 121 Feb 13 19:22:43.166383 locksmithd[1894]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:22:43.210186 polkitd[1990]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:22:43.210282 polkitd[1990]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:22:43.213034 polkitd[1990]: Finished loading, compiling and executing 2 rules Feb 13 19:22:43.215102 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO https_proxy: Feb 13 19:22:43.216242 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:22:43.216458 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:22:43.219720 polkitd[1990]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:22:43.254784 systemd-hostnamed[1893]: Hostname set to (transient) Feb 13 19:22:43.255195 systemd-resolved[1801]: System hostname changed to 'ip-172-31-18-187'. Feb 13 19:22:43.258716 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:22:43.282555 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:22:43.310851 systemd[1]: Started sshd@0-172.31.18.187:22-139.178.89.65:51792.service - OpenSSH per-connection server daemon (139.178.89.65:51792). Feb 13 19:22:43.329099 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO http_proxy: Feb 13 19:22:43.392509 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:22:43.392761 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:22:43.410651 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:22:43.439990 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO no_proxy: Feb 13 19:22:43.494241 coreos-metadata[1984]: Feb 13 19:22:43.493 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:22:43.499557 coreos-metadata[1984]: Feb 13 19:22:43.499 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:22:43.503156 coreos-metadata[1984]: Feb 13 19:22:43.503 INFO Fetch successful Feb 13 19:22:43.503460 coreos-metadata[1984]: Feb 13 19:22:43.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:22:43.506043 coreos-metadata[1984]: Feb 13 19:22:43.505 INFO Fetch successful Feb 13 19:22:43.508069 unknown[1984]: wrote ssh authorized keys file for user: core Feb 13 19:22:43.521804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:22:43.535311 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:22:43.536186 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:22:43.549485 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:22:43.551624 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:22:43.622078 update-ssh-keys[2090]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:22:43.627442 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:22:43.640163 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:22:43.647929 systemd[1]: Finished sshkeys.service. 
Feb 13 19:22:43.739414 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO Agent will take identity from EC2 Feb 13 19:22:43.742390 containerd[1889]: time="2025-02-13T19:22:43.742227451Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:22:43.759585 sshd[2058]: Accepted publickey for core from 139.178.89.65 port 51792 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:43.761943 sshd-session[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:43.791526 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:22:43.800331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:22:43.814230 systemd-logind[1871]: New session 1 of user core. Feb 13 19:22:43.845884 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:22:43.845774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:22:43.859433 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:22:43.880250 (systemd)[2103]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:22:43.918394 containerd[1889]: time="2025-02-13T19:22:43.916394412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.923042 containerd[1889]: time="2025-02-13T19:22:43.922890173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:43.923867 containerd[1889]: time="2025-02-13T19:22:43.923794447Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:22:43.924941 containerd[1889]: time="2025-02-13T19:22:43.924908078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:22:43.925384 containerd[1889]: time="2025-02-13T19:22:43.925253026Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:22:43.925506 containerd[1889]: time="2025-02-13T19:22:43.925487519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.925737 containerd[1889]: time="2025-02-13T19:22:43.925714115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926251283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926538942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926561451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926581744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926602760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.926704587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.927043935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.927215515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.927233659Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.927334320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:22:43.928528 containerd[1889]: time="2025-02-13T19:22:43.927387945Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:22:43.940482 containerd[1889]: time="2025-02-13T19:22:43.940432300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.940827136Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.940886198Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.940919434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.940955722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941179253Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941517286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941748221Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941769128Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941790529Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941810318Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941828133Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941846332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941864932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.943899 containerd[1889]: time="2025-02-13T19:22:43.941893325Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.941912118Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.941929050Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.941968764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942010651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942030020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942047452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942068729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942086206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942104189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942120830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942140768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942158589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942178902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.944895 containerd[1889]: time="2025-02-13T19:22:43.942196625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942212525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942230058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942249929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942281742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942303655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942321326Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942385735Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942411144Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942429005Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942448399Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942462472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942501107Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942516450Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:22:43.945508 containerd[1889]: time="2025-02-13T19:22:43.942532339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:22:43.948397 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:22:43.951616 containerd[1889]: time="2025-02-13T19:22:43.950647705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:22:43.951616 containerd[1889]: time="2025-02-13T19:22:43.950782595Z" level=info msg="Connect containerd service" Feb 13 19:22:43.951616 containerd[1889]: time="2025-02-13T19:22:43.951413944Z" level=info msg="using legacy CRI server" Feb 13 19:22:43.951616 containerd[1889]: time="2025-02-13T19:22:43.951433923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:22:43.953882 containerd[1889]: time="2025-02-13T19:22:43.953220820Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:22:43.958143 containerd[1889]: time="2025-02-13T19:22:43.958043584Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load 
failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958327675Z" level=info msg="Start subscribing containerd event" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958408184Z" level=info msg="Start recovering state" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958495581Z" level=info msg="Start event monitor" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958517542Z" level=info msg="Start snapshots syncer" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958530126Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:22:43.959147 containerd[1889]: time="2025-02-13T19:22:43.958540788Z" level=info msg="Start streaming server" Feb 13 19:22:43.963282 containerd[1889]: time="2025-02-13T19:22:43.962585850Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:22:43.963282 containerd[1889]: time="2025-02-13T19:22:43.962679862Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:22:43.968734 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:22:43.971988 containerd[1889]: time="2025-02-13T19:22:43.968777634Z" level=info msg="containerd successfully booted in 0.227888s" Feb 13 19:22:44.045472 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:22:44.131470 systemd[2103]: Queued start job for default target default.target. Feb 13 19:22:44.140397 systemd[2103]: Created slice app.slice - User Application Slice. Feb 13 19:22:44.140458 systemd[2103]: Reached target paths.target - Paths. Feb 13 19:22:44.140482 systemd[2103]: Reached target timers.target - Timers. Feb 13 19:22:44.147972 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:22:44.156683 systemd[2103]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:22:44.189082 systemd[2103]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:22:44.189240 systemd[2103]: Reached target sockets.target - Sockets. Feb 13 19:22:44.189263 systemd[2103]: Reached target basic.target - Basic System. Feb 13 19:22:44.189466 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:22:44.192934 systemd[2103]: Reached target default.target - Main User Target. Feb 13 19:22:44.193034 systemd[2103]: Startup finished in 298ms. Feb 13 19:22:44.198356 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:22:44.247623 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 19:22:44.349648 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:22:44.368136 systemd[1]: Started sshd@1-172.31.18.187:22-139.178.89.65:40124.service - OpenSSH per-connection server daemon (139.178.89.65:40124). Feb 13 19:22:44.448188 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:22:44.546539 tar[1880]: linux-amd64/README.md Feb 13 19:22:44.550048 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [Registrar] Starting registrar module Feb 13 19:22:44.569343 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 19:22:44.618641 sshd[2117]: Accepted publickey for core from 139.178.89.65 port 40124 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:44.621416 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:44.635497 systemd-logind[1871]: New session 2 of user core. Feb 13 19:22:44.638933 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:22:44.651792 amazon-ssm-agent[1943]: 2025-02-13 19:22:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:22:44.781527 sshd[2122]: Connection closed by 139.178.89.65 port 40124 Feb 13 19:22:44.783094 sshd-session[2117]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:44.796494 systemd[1]: sshd@1-172.31.18.187:22-139.178.89.65:40124.service: Deactivated successfully. Feb 13 19:22:44.821332 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:22:44.823283 systemd-logind[1871]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:22:44.873558 systemd[1]: Started sshd@2-172.31.18.187:22-139.178.89.65:40126.service - OpenSSH per-connection server daemon (139.178.89.65:40126). Feb 13 19:22:44.888435 systemd-logind[1871]: Removed session 2. Feb 13 19:22:45.138717 ntpd[1864]: Listen normally on 6 eth0 [fe80::4eb:50ff:fe39:23a7%2]:123 Feb 13 19:22:45.141051 ntpd[1864]: 13 Feb 19:22:45 ntpd[1864]: Listen normally on 6 eth0 [fe80::4eb:50ff:fe39:23a7%2]:123 Feb 13 19:22:45.176089 sshd[2127]: Accepted publickey for core from 139.178.89.65 port 40126 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:45.177109 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:45.211283 systemd-logind[1871]: New session 3 of user core. Feb 13 19:22:45.223884 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:22:45.259707 amazon-ssm-agent[1943]: 2025-02-13 19:22:45 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:22:45.311374 amazon-ssm-agent[1943]: 2025-02-13 19:22:45 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:22:45.311374 amazon-ssm-agent[1943]: 2025-02-13 19:22:45 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:22:45.312174 amazon-ssm-agent[1943]: 2025-02-13 19:22:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:22:45.360343 amazon-ssm-agent[1943]: 2025-02-13 19:22:45 INFO [CredentialRefresher] Next credential rotation will be in 31.09164613966667 minutes Feb 13 19:22:45.370905 sshd[2129]: Connection closed by 139.178.89.65 port 40126 Feb 13 19:22:45.372409 sshd-session[2127]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:45.381262 systemd[1]: sshd@2-172.31.18.187:22-139.178.89.65:40126.service: Deactivated successfully. Feb 13 19:22:45.396223 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:22:45.398551 systemd-logind[1871]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:22:45.408154 systemd-logind[1871]: Removed session 3. 
Feb 13 19:22:46.342572 amazon-ssm-agent[1943]: 2025-02-13 19:22:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:22:46.443992 amazon-ssm-agent[1943]: 2025-02-13 19:22:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2134) started Feb 13 19:22:46.543589 amazon-ssm-agent[1943]: 2025-02-13 19:22:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:22:48.170216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:48.191134 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:22:48.194508 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:22:48.196250 systemd[1]: Startup finished in 1.019s (kernel) + 11.841s (initrd) + 13.610s (userspace) = 26.471s. Feb 13 19:22:48.220116 agetty[2088]: failed to open credentials directory Feb 13 19:22:48.501521 agetty[2087]: failed to open credentials directory Feb 13 19:22:49.442814 systemd-resolved[1801]: Clock change detected. Flushing caches. Feb 13 19:22:50.753602 kubelet[2149]: E0213 19:22:50.753521 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:22:50.761672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:22:50.761865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:22:50.762254 systemd[1]: kubelet.service: Consumed 1.038s CPU time. Feb 13 19:22:55.722545 systemd[1]: Started sshd@3-172.31.18.187:22-139.178.89.65:55808.service - OpenSSH per-connection server daemon (139.178.89.65:55808). Feb 13 19:22:55.905484 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 55808 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:55.906906 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:55.919086 systemd-logind[1871]: New session 4 of user core. Feb 13 19:22:55.930385 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:22:56.058451 sshd[2163]: Connection closed by 139.178.89.65 port 55808 Feb 13 19:22:56.059113 sshd-session[2161]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:56.066661 systemd[1]: sshd@3-172.31.18.187:22-139.178.89.65:55808.service: Deactivated successfully. Feb 13 19:22:56.068839 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:22:56.072549 systemd-logind[1871]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:22:56.075494 systemd-logind[1871]: Removed session 4. Feb 13 19:22:56.086217 systemd[1]: Started sshd@4-172.31.18.187:22-139.178.89.65:55810.service - OpenSSH per-connection server daemon (139.178.89.65:55810). Feb 13 19:22:56.279602 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 55810 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:56.281061 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:56.287782 systemd-logind[1871]: New session 5 of user core. 
Feb 13 19:22:56.300380 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:22:56.415338 sshd[2170]: Connection closed by 139.178.89.65 port 55810 Feb 13 19:22:56.416585 sshd-session[2168]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:56.424362 systemd[1]: sshd@4-172.31.18.187:22-139.178.89.65:55810.service: Deactivated successfully. Feb 13 19:22:56.427593 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:22:56.430571 systemd-logind[1871]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:22:56.431717 systemd-logind[1871]: Removed session 5. Feb 13 19:22:56.454550 systemd[1]: Started sshd@5-172.31.18.187:22-139.178.89.65:55812.service - OpenSSH per-connection server daemon (139.178.89.65:55812). Feb 13 19:22:56.647362 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 55812 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:56.648816 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:56.655535 systemd-logind[1871]: New session 6 of user core. Feb 13 19:22:56.664396 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:22:56.801552 sshd[2177]: Connection closed by 139.178.89.65 port 55812 Feb 13 19:22:56.802791 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:56.806751 systemd[1]: sshd@5-172.31.18.187:22-139.178.89.65:55812.service: Deactivated successfully. Feb 13 19:22:56.815748 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:22:56.817619 systemd-logind[1871]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:22:56.818851 systemd-logind[1871]: Removed session 6. Feb 13 19:22:56.849574 systemd[1]: Started sshd@6-172.31.18.187:22-139.178.89.65:55818.service - OpenSSH per-connection server daemon (139.178.89.65:55818). Feb 13 19:22:57.070541 sshd[2182]: Accepted publickey for core from 139.178.89.65 port 55818 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:57.072268 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:57.084520 systemd-logind[1871]: New session 7 of user core. Feb 13 19:22:57.095383 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:22:57.264557 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:22:57.264959 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:57.301577 sudo[2185]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:57.324210 sshd[2184]: Connection closed by 139.178.89.65 port 55818 Feb 13 19:22:57.325587 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:57.340168 systemd-logind[1871]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:22:57.341714 systemd[1]: sshd@6-172.31.18.187:22-139.178.89.65:55818.service: Deactivated successfully. Feb 13 19:22:57.348989 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:22:57.373548 systemd[1]: Started sshd@7-172.31.18.187:22-139.178.89.65:55828.service - OpenSSH per-connection server daemon (139.178.89.65:55828). Feb 13 19:22:57.374730 systemd-logind[1871]: Removed session 7. 
Feb 13 19:22:57.581669 sshd[2190]: Accepted publickey for core from 139.178.89.65 port 55828 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:57.583545 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:57.602189 systemd-logind[1871]: New session 8 of user core. Feb 13 19:22:57.610402 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:22:57.733854 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:22:57.734613 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:57.749925 sudo[2194]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:57.771019 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:22:57.774795 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:57.810773 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:22:57.852398 augenrules[2216]: No rules Feb 13 19:22:57.854469 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:22:57.854787 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:22:57.856089 sudo[2193]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:57.880055 sshd[2192]: Connection closed by 139.178.89.65 port 55828 Feb 13 19:22:57.881431 sshd-session[2190]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:57.886232 systemd-logind[1871]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:22:57.886867 systemd[1]: sshd@7-172.31.18.187:22-139.178.89.65:55828.service: Deactivated successfully. Feb 13 19:22:57.889480 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:22:57.890973 systemd-logind[1871]: Removed session 8. Feb 13 19:22:57.945661 systemd[1]: Started sshd@8-172.31.18.187:22-139.178.89.65:55836.service - OpenSSH per-connection server daemon (139.178.89.65:55836). Feb 13 19:22:58.123660 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 55836 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:22:58.127218 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:58.141066 systemd-logind[1871]: New session 9 of user core. Feb 13 19:22:58.147612 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:22:58.263141 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:22:58.263582 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:59.810806 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:22:59.833692 (dockerd)[2245]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:23:00.832714 dockerd[2245]: time="2025-02-13T19:23:00.832648144Z" level=info msg="Starting up" Feb 13 19:23:00.841396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:23:00.854426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:01.183496 dockerd[2245]: time="2025-02-13T19:23:01.181955255Z" level=info msg="Loading containers: start." 
Feb 13 19:23:01.269440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:01.290649 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:23:01.454056 kubelet[2275]: E0213 19:23:01.453863 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:23:01.500975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:23:01.501192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:23:01.890497 kernel: Initializing XFRM netlink socket Feb 13 19:23:01.996626 (udev-worker)[2282]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:23:02.364742 systemd-networkd[1798]: docker0: Link UP Feb 13 19:23:02.469410 dockerd[2245]: time="2025-02-13T19:23:02.469332852Z" level=info msg="Loading containers: done." Feb 13 19:23:02.589351 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2291856774-merged.mount: Deactivated successfully. Feb 13 19:23:02.621123 dockerd[2245]: time="2025-02-13T19:23:02.620746416Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:23:02.621123 dockerd[2245]: time="2025-02-13T19:23:02.620915405Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:23:02.621123 dockerd[2245]: time="2025-02-13T19:23:02.621062422Z" level=info msg="Daemon has completed initialization" Feb 13 19:23:02.778163 dockerd[2245]: time="2025-02-13T19:23:02.777843196Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:23:02.777980 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:23:04.381182 containerd[1889]: time="2025-02-13T19:23:04.381138158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:23:05.147630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864725834.mount: Deactivated successfully. 
Feb 13 19:23:08.009225 containerd[1889]: time="2025-02-13T19:23:08.009168638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:08.010902 containerd[1889]: time="2025-02-13T19:23:08.010855755Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:23:08.012867 containerd[1889]: time="2025-02-13T19:23:08.012789066Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:08.020784 containerd[1889]: time="2025-02-13T19:23:08.020729810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:08.025244 containerd[1889]: time="2025-02-13T19:23:08.024631288Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 3.643439529s" Feb 13 19:23:08.025244 containerd[1889]: time="2025-02-13T19:23:08.024738662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:23:08.027866 containerd[1889]: time="2025-02-13T19:23:08.027825503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:23:11.396801 containerd[1889]: time="2025-02-13T19:23:11.396749107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:11.398872 containerd[1889]: time="2025-02-13T19:23:11.398629776Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:23:11.401592 containerd[1889]: time="2025-02-13T19:23:11.401075025Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:11.406345 containerd[1889]: time="2025-02-13T19:23:11.406293228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:11.408281 containerd[1889]: time="2025-02-13T19:23:11.408219136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 3.380342805s" Feb 13 19:23:11.408463 containerd[1889]: time="2025-02-13T19:23:11.408442515Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:23:11.410271 
containerd[1889]: time="2025-02-13T19:23:11.409960570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:23:11.751703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:23:11.759530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:12.217383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:12.232099 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:23:12.312540 kubelet[2515]: E0213 19:23:12.312474 2515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:23:12.319528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:23:12.319723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:23:13.601784 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:23:14.248363 containerd[1889]: time="2025-02-13T19:23:14.248305659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:14.249808 containerd[1889]: time="2025-02-13T19:23:14.249748813Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:23:14.252412 containerd[1889]: time="2025-02-13T19:23:14.252346657Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:14.257551 containerd[1889]: time="2025-02-13T19:23:14.257483864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:14.259028 containerd[1889]: time="2025-02-13T19:23:14.258743261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 2.848467642s" Feb 13 19:23:14.259028 containerd[1889]: time="2025-02-13T19:23:14.258788210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:23:14.259403 containerd[1889]: time="2025-02-13T19:23:14.259376279Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:23:16.575373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259294143.mount: Deactivated successfully. 
Feb 13 19:23:17.634832 containerd[1889]: time="2025-02-13T19:23:17.633645040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:17.636367 containerd[1889]: time="2025-02-13T19:23:17.636298869Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:23:17.639204 containerd[1889]: time="2025-02-13T19:23:17.639155427Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:17.656474 containerd[1889]: time="2025-02-13T19:23:17.656422317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:17.666005 containerd[1889]: time="2025-02-13T19:23:17.665952843Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 3.40653838s" Feb 13 19:23:17.666232 containerd[1889]: time="2025-02-13T19:23:17.666202860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:23:17.670725 containerd[1889]: time="2025-02-13T19:23:17.670613059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:23:18.364495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616910854.mount: Deactivated successfully. 
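As a quick sanity check on the pull timings containerd reports, the logged size and duration for the kube-proxy image work out to roughly 9 MB/s of registry throughput; a back-of-the-envelope sketch with the values copied from the log:

```python
# Values copied verbatim from the kube-proxy PullImage line above.
size_bytes = 30_907_858   # repo digest size reported by containerd
duration_s = 3.40653838   # "in 3.40653838s"

rate = size_bytes / duration_s
print(f"{rate / 1e6:.2f} MB/s ({rate * 8 / 1e6:.1f} Mbit/s)")  # ~9.07 MB/s, ~72.6 Mbit/s
```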
Feb 13 19:23:20.067624 containerd[1889]: time="2025-02-13T19:23:20.067567395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.069202 containerd[1889]: time="2025-02-13T19:23:20.069153207Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:23:20.069984 containerd[1889]: time="2025-02-13T19:23:20.069919265Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.086757 containerd[1889]: time="2025-02-13T19:23:20.085911551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.091850 containerd[1889]: time="2025-02-13T19:23:20.091791857Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.420578901s" Feb 13 19:23:20.091850 containerd[1889]: time="2025-02-13T19:23:20.091853192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:23:20.095379 containerd[1889]: time="2025-02-13T19:23:20.095239449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:23:20.694852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788645421.mount: Deactivated successfully. 
Feb 13 19:23:20.711297 containerd[1889]: time="2025-02-13T19:23:20.711174229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.717259 containerd[1889]: time="2025-02-13T19:23:20.716338446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:23:20.725989 containerd[1889]: time="2025-02-13T19:23:20.720400127Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.727288 containerd[1889]: time="2025-02-13T19:23:20.727246049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:20.733071 containerd[1889]: time="2025-02-13T19:23:20.733017269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.735438ms" Feb 13 19:23:20.733071 containerd[1889]: time="2025-02-13T19:23:20.733072532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:23:20.737312 containerd[1889]: time="2025-02-13T19:23:20.737269813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:23:21.401092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863049339.mount: Deactivated successfully. Feb 13 19:23:22.414940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:23:22.434624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:22.939466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:22.970193 (kubelet)[2653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:23:23.115147 kubelet[2653]: E0213 19:23:23.113171 2653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:23:23.116038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:23:23.117113 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
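The kubelet failure here is the expected pre-bootstrap state rather than a fault: the unit starts, finds no /var/lib/kubelet/config.yaml (kubeadm writes that file during init), exits with status 1, and systemd schedules another restart, hence the rising restart counter. A sketch of loading that file once it exists, using the published KubeletConfiguration type; the k8s.io/kubelet and sigs.k8s.io/yaml module paths are assumptions about the usual layout:

    package main

    import (
        "fmt"
        "os"

        kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // The same path the kubelet complains about in the log.
        data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
        if err != nil {
            // Before `kubeadm init` runs, this is the expected failure mode.
            fmt.Println("kubelet config not written yet:", err)
            return
        }
        var cfg kubeletconfigv1beta1.KubeletConfiguration
        if err := yaml.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Println("cgroup driver:", cfg.CgroupDriver)
    }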
Feb 13 19:23:24.979256 containerd[1889]: time="2025-02-13T19:23:24.979192509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:24.982083 containerd[1889]: time="2025-02-13T19:23:24.982003585Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:23:24.985248 containerd[1889]: time="2025-02-13T19:23:24.985174423Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:25.000177 containerd[1889]: time="2025-02-13T19:23:24.999016057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:25.005227 containerd[1889]: time="2025-02-13T19:23:25.005024166Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.267689894s" Feb 13 19:23:25.005227 containerd[1889]: time="2025-02-13T19:23:25.005071063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:23:27.051873 update_engine[1872]: I20250213 19:23:27.051015 1872 update_attempter.cc:509] Updating boot flags... Feb 13 19:23:27.159210 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2698) Feb 13 19:23:27.444194 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2700) Feb 13 19:23:29.305120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:29.316925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:29.364184 systemd[1]: Reloading requested from client PID 2873 ('systemctl') (unit session-9.scope)... Feb 13 19:23:29.364203 systemd[1]: Reloading... Feb 13 19:23:29.604163 zram_generator::config[2916]: No configuration found. Feb 13 19:23:29.842761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:23:29.984993 systemd[1]: Reloading finished in 620 ms. Feb 13 19:23:30.145356 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:23:30.145597 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:23:30.145928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:30.165630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:30.534566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:30.552227 (kubelet)[2973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:23:30.686979 kubelet[2973]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:23:30.686979 kubelet[2973]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:23:30.686979 kubelet[2973]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:23:30.687571 kubelet[2973]: I0213 19:23:30.687124 2973 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:23:31.545656 kubelet[2973]: I0213 19:23:31.545579 2973 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:23:31.545656 kubelet[2973]: I0213 19:23:31.545625 2973 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:23:31.546160 kubelet[2973]: I0213 19:23:31.546112 2973 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:23:31.619751 kubelet[2973]: I0213 19:23:31.619705 2973 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:23:31.627151 kubelet[2973]: E0213 19:23:31.627085 2973 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.187:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:31.647388 kubelet[2973]: E0213 19:23:31.647336 2973 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:23:31.647388 kubelet[2973]: I0213 19:23:31.647386 2973 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:23:31.662734 kubelet[2973]: I0213 19:23:31.662630 2973 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:23:31.672406 kubelet[2973]: I0213 19:23:31.671968 2973 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:23:31.672927 kubelet[2973]: I0213 19:23:31.672404 2973 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-187","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:23:31.680846 kubelet[2973]: I0213 19:23:31.680798 2973 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:23:31.680846 kubelet[2973]: I0213 19:23:31.680849 2973 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:23:31.681058 kubelet[2973]: I0213 19:23:31.681032 2973 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:23:31.691480 kubelet[2973]: I0213 19:23:31.691357 2973 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:23:31.691480 kubelet[2973]: I0213 19:23:31.691404 2973 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:23:31.691480 kubelet[2973]: I0213 19:23:31.691435 2973 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:23:31.691480 kubelet[2973]: I0213 19:23:31.691449 2973 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:23:31.710517 kubelet[2973]: W0213 19:23:31.709796 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:31.711399 kubelet[2973]: E0213 19:23:31.710525 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:31.711625 kubelet[2973]: I0213 
19:23:31.711404 2973 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:23:31.724015 kubelet[2973]: I0213 19:23:31.721938 2973 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:23:31.724193 kubelet[2973]: W0213 19:23:31.724102 2973 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:23:31.725064 kubelet[2973]: I0213 19:23:31.725018 2973 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:23:31.725064 kubelet[2973]: I0213 19:23:31.725062 2973 server.go:1287] "Started kubelet" Feb 13 19:23:31.734139 kubelet[2973]: W0213 19:23:31.732330 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:31.734412 kubelet[2973]: E0213 19:23:31.734190 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:31.746722 kubelet[2973]: E0213 19:23:31.742705 2973 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.187:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.187:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-187.1823daf177ae75bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-187,UID:ip-172-31-18-187,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-187,},FirstTimestamp:2025-02-13 19:23:31.725039037 +0000 UTC m=+1.159590995,LastTimestamp:2025-02-13 19:23:31.725039037 +0000 UTC m=+1.159590995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-187,}" Feb 13 19:23:31.746959 kubelet[2973]: I0213 19:23:31.746814 2973 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:23:31.749617 kubelet[2973]: I0213 19:23:31.749560 2973 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:23:31.755765 kubelet[2973]: I0213 19:23:31.755294 2973 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:23:31.755765 kubelet[2973]: I0213 19:23:31.755660 2973 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:23:31.770630 kubelet[2973]: I0213 19:23:31.763516 2973 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:23:31.770630 kubelet[2973]: I0213 19:23:31.763873 2973 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:23:31.782887 kubelet[2973]: E0213 19:23:31.782831 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:31.787652 kubelet[2973]: E0213 
19:23:31.787600 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": dial tcp 172.31.18.187:6443: connect: connection refused" interval="200ms" Feb 13 19:23:31.787991 kubelet[2973]: I0213 19:23:31.787870 2973 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:23:31.792676 kubelet[2973]: I0213 19:23:31.791364 2973 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:23:31.792676 kubelet[2973]: I0213 19:23:31.791500 2973 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:23:31.792676 kubelet[2973]: W0213 19:23:31.792249 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:31.792676 kubelet[2973]: E0213 19:23:31.792340 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:31.802416 kubelet[2973]: I0213 19:23:31.798076 2973 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:23:31.815764 kubelet[2973]: E0213 19:23:31.811775 2973 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:23:31.824498 kubelet[2973]: I0213 19:23:31.818513 2973 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:23:31.824498 kubelet[2973]: I0213 19:23:31.818539 2973 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:23:31.869190 kubelet[2973]: I0213 19:23:31.867421 2973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:23:31.871152 kubelet[2973]: I0213 19:23:31.871097 2973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:23:31.871152 kubelet[2973]: I0213 19:23:31.871149 2973 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:23:31.871317 kubelet[2973]: I0213 19:23:31.871173 2973 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
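The reflector warnings repeating through this run all share one cause: the kubelet is configured against https://172.31.18.187:6443, but the kube-apiserver it is trying to reach is itself a static pod this kubelet has not started yet, so every List/Watch, the CSR post, and the lease request fail with "connection refused" until bootstrap completes; client-go simply retries with backoff. A sketch of the same List call the Node reflector issues, assuming the kubelet's kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the kubelet uses its bootstrap/rotated cert.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The same request the reflector retries: list this node by name.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=ip-172-31-18-187",
        })
        if err != nil {
            // Until the static kube-apiserver pod is up, this fails with
            // "connect: connection refused", exactly as in the log.
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        fmt.Println("nodes:", len(nodes.Items))
    }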
Feb 13 19:23:31.871317 kubelet[2973]: I0213 19:23:31.871184 2973 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:23:31.871317 kubelet[2973]: E0213 19:23:31.871235 2973 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:23:31.874513 kubelet[2973]: W0213 19:23:31.873431 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:31.874513 kubelet[2973]: E0213 19:23:31.873484 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:31.880847 kubelet[2973]: I0213 19:23:31.880816 2973 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:23:31.880847 kubelet[2973]: I0213 19:23:31.880836 2973 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:23:31.880847 kubelet[2973]: I0213 19:23:31.880855 2973 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:23:31.886541 kubelet[2973]: I0213 19:23:31.886501 2973 policy_none.go:49] "None policy: Start" Feb 13 19:23:31.886541 kubelet[2973]: I0213 19:23:31.886531 2973 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:23:31.886541 kubelet[2973]: I0213 19:23:31.886548 2973 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:23:31.887977 kubelet[2973]: E0213 19:23:31.887944 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:31.895682 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:23:31.910520 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:23:31.915720 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:23:31.923351 kubelet[2973]: I0213 19:23:31.923312 2973 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:23:31.923570 kubelet[2973]: I0213 19:23:31.923546 2973 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:23:31.923635 kubelet[2973]: I0213 19:23:31.923565 2973 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:23:31.924344 kubelet[2973]: I0213 19:23:31.924307 2973 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:23:31.927583 kubelet[2973]: E0213 19:23:31.927427 2973 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:23:31.927583 kubelet[2973]: E0213 19:23:31.927497 2973 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-187\" not found" Feb 13 19:23:31.986592 systemd[1]: Created slice kubepods-burstable-podc7d0687098d317ac29ebc84e6789f9d6.slice - libcontainer container kubepods-burstable-podc7d0687098d317ac29ebc84e6789f9d6.slice. 
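The Created slice lines show the systemd cgroup driver from the node config at work: pods are grouped by QoS class (kubepods-burstable.slice, kubepods-besteffort.slice), and each pod then gets its own slice named from its UID. A toy sketch of that naming convention, which reproduces the burstable slice created here; the underscore escaping for dashed UIDs is an assumption drawn from the driver's convention (these hex UIDs happen to contain no dashes):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice sketches the systemd cgroup name the kubelet's systemd driver
    // gives a pod: kubepods-<qos>-pod<uid>.slice, with dashes in the UID
    // escaped to underscores.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos,
            strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "c7d0687098d317ac29ebc84e6789f9d6"))
        // kubepods-burstable-podc7d0687098d317ac29ebc84e6789f9d6.slice
    }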
Feb 13 19:23:31.988391 kubelet[2973]: E0213 19:23:31.988346 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": dial tcp 172.31.18.187:6443: connect: connection refused" interval="400ms" Feb 13 19:23:32.007409 kubelet[2973]: E0213 19:23:32.006439 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:32.015839 systemd[1]: Created slice kubepods-burstable-pode81069c321c5f4a7b1a82c3307cef94f.slice - libcontainer container kubepods-burstable-pode81069c321c5f4a7b1a82c3307cef94f.slice. Feb 13 19:23:32.026881 kubelet[2973]: E0213 19:23:32.026851 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:32.036233 kubelet[2973]: I0213 19:23:32.035740 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:32.036233 kubelet[2973]: E0213 19:23:32.036178 2973 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.187:6443/api/v1/nodes\": dial tcp 172.31.18.187:6443: connect: connection refused" node="ip-172-31-18-187" Feb 13 19:23:32.038413 systemd[1]: Created slice kubepods-burstable-podb59d6363d994a676cad324d0d1ee8b10.slice - libcontainer container kubepods-burstable-podb59d6363d994a676cad324d0d1ee8b10.slice. Feb 13 19:23:32.043313 kubelet[2973]: E0213 19:23:32.043275 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:32.093262 kubelet[2973]: I0213 19:23:32.093054 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:32.093262 kubelet[2973]: I0213 19:23:32.093108 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:32.093262 kubelet[2973]: I0213 19:23:32.093152 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:32.093262 kubelet[2973]: I0213 19:23:32.093177 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b59d6363d994a676cad324d0d1ee8b10-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-187\" (UID: \"b59d6363d994a676cad324d0d1ee8b10\") " pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:32.093262 kubelet[2973]: I0213 19:23:32.093198 2973 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-ca-certs\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:32.093560 kubelet[2973]: I0213 19:23:32.093219 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:32.093560 kubelet[2973]: I0213 19:23:32.093244 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:32.093560 kubelet[2973]: I0213 19:23:32.093265 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:32.093560 kubelet[2973]: I0213 19:23:32.093302 2973 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:32.238772 kubelet[2973]: I0213 19:23:32.238738 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:32.239289 kubelet[2973]: E0213 19:23:32.239250 2973 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.187:6443/api/v1/nodes\": dial tcp 172.31.18.187:6443: connect: connection refused" node="ip-172-31-18-187" Feb 13 19:23:32.310506 containerd[1889]: time="2025-02-13T19:23:32.310452596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-187,Uid:c7d0687098d317ac29ebc84e6789f9d6,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:32.341126 containerd[1889]: time="2025-02-13T19:23:32.341072013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-187,Uid:e81069c321c5f4a7b1a82c3307cef94f,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:32.347242 containerd[1889]: time="2025-02-13T19:23:32.347125268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-187,Uid:b59d6363d994a676cad324d0d1ee8b10,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:32.389055 kubelet[2973]: E0213 19:23:32.389005 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": dial tcp 172.31.18.187:6443: connect: connection refused" interval="800ms" Feb 13 19:23:32.594441 kubelet[2973]: W0213 
19:23:32.594280 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:32.594441 kubelet[2973]: E0213 19:23:32.594363 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:32.642161 kubelet[2973]: I0213 19:23:32.642043 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:32.642466 kubelet[2973]: E0213 19:23:32.642428 2973 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.187:6443/api/v1/nodes\": dial tcp 172.31.18.187:6443: connect: connection refused" node="ip-172-31-18-187" Feb 13 19:23:32.880991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986346746.mount: Deactivated successfully. Feb 13 19:23:32.894781 containerd[1889]: time="2025-02-13T19:23:32.894655805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:23:32.898737 containerd[1889]: time="2025-02-13T19:23:32.898634237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:23:32.909994 containerd[1889]: time="2025-02-13T19:23:32.909935318Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:23:32.911336 containerd[1889]: time="2025-02-13T19:23:32.911289398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:23:32.912213 containerd[1889]: time="2025-02-13T19:23:32.912006873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:23:32.916172 containerd[1889]: time="2025-02-13T19:23:32.916105395Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:23:32.918474 containerd[1889]: time="2025-02-13T19:23:32.918427141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:23:32.920618 containerd[1889]: time="2025-02-13T19:23:32.920429522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:23:32.920618 containerd[1889]: time="2025-02-13T19:23:32.920506445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.921636ms" Feb 13 19:23:32.930406 containerd[1889]: time="2025-02-13T19:23:32.928594846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.390745ms" Feb 13 19:23:32.943736 containerd[1889]: time="2025-02-13T19:23:32.943683682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.439597ms" Feb 13 19:23:33.173211 kubelet[2973]: W0213 19:23:33.173114 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:33.173902 kubelet[2973]: E0213 19:23:33.173251 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:33.189490 kubelet[2973]: E0213 19:23:33.189444 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": dial tcp 172.31.18.187:6443: connect: connection refused" interval="1.6s" Feb 13 19:23:33.348167 kubelet[2973]: W0213 19:23:33.348061 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:33.348395 kubelet[2973]: E0213 19:23:33.348178 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:33.404955 containerd[1889]: time="2025-02-13T19:23:33.404826849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:33.404955 containerd[1889]: time="2025-02-13T19:23:33.404901972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:33.404955 containerd[1889]: time="2025-02-13T19:23:33.404925962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.406464 containerd[1889]: time="2025-02-13T19:23:33.399426638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:33.406464 containerd[1889]: time="2025-02-13T19:23:33.405342417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:33.406464 containerd[1889]: time="2025-02-13T19:23:33.405370913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.406464 containerd[1889]: time="2025-02-13T19:23:33.405465310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.406464 containerd[1889]: time="2025-02-13T19:23:33.405027602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.423542 containerd[1889]: time="2025-02-13T19:23:33.423192644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:33.423542 containerd[1889]: time="2025-02-13T19:23:33.423264324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:33.423542 containerd[1889]: time="2025-02-13T19:23:33.423281065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.423542 containerd[1889]: time="2025-02-13T19:23:33.423400761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:33.425424 kubelet[2973]: W0213 19:23:33.425389 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:33.425666 kubelet[2973]: E0213 19:23:33.425609 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:33.459279 kubelet[2973]: E0213 19:23:33.454329 2973 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.187:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.187:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-187.1823daf177ae75bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-187,UID:ip-172-31-18-187,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-187,},FirstTimestamp:2025-02-13 19:23:31.725039037 +0000 UTC m=+1.159590995,LastTimestamp:2025-02-13 19:23:31.725039037 +0000 UTC m=+1.159590995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-187,}" Feb 13 19:23:33.479458 kubelet[2973]: I0213 19:23:33.479120 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:33.482391 kubelet[2973]: E0213 19:23:33.482232 
2973 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.187:6443/api/v1/nodes\": dial tcp 172.31.18.187:6443: connect: connection refused" node="ip-172-31-18-187" Feb 13 19:23:33.539669 systemd[1]: Started cri-containerd-41fa92b2b175a5e7d7ef0f286882455625d9accdd2ff2c95cab8361cd17634e3.scope - libcontainer container 41fa92b2b175a5e7d7ef0f286882455625d9accdd2ff2c95cab8361cd17634e3. Feb 13 19:23:33.569451 systemd[1]: Started cri-containerd-780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc.scope - libcontainer container 780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc. Feb 13 19:23:33.577537 systemd[1]: Started cri-containerd-971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38.scope - libcontainer container 971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38. Feb 13 19:23:33.695974 containerd[1889]: time="2025-02-13T19:23:33.693583679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-187,Uid:e81069c321c5f4a7b1a82c3307cef94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc\"" Feb 13 19:23:33.704908 containerd[1889]: time="2025-02-13T19:23:33.704672054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-187,Uid:c7d0687098d317ac29ebc84e6789f9d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"41fa92b2b175a5e7d7ef0f286882455625d9accdd2ff2c95cab8361cd17634e3\"" Feb 13 19:23:33.712862 containerd[1889]: time="2025-02-13T19:23:33.712804123Z" level=info msg="CreateContainer within sandbox \"41fa92b2b175a5e7d7ef0f286882455625d9accdd2ff2c95cab8361cd17634e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:23:33.713281 kubelet[2973]: E0213 19:23:33.713228 2973 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.187:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:33.715615 containerd[1889]: time="2025-02-13T19:23:33.715564921Z" level=info msg="CreateContainer within sandbox \"780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:23:33.754450 containerd[1889]: time="2025-02-13T19:23:33.754298373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-187,Uid:b59d6363d994a676cad324d0d1ee8b10,Namespace:kube-system,Attempt:0,} returns sandbox id \"971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38\"" Feb 13 19:23:33.758772 containerd[1889]: time="2025-02-13T19:23:33.758727219Z" level=info msg="CreateContainer within sandbox \"971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:23:33.788048 containerd[1889]: time="2025-02-13T19:23:33.787997619Z" level=info msg="CreateContainer within sandbox \"780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87\"" Feb 13 19:23:33.789696 containerd[1889]: time="2025-02-13T19:23:33.789664441Z" level=info 
msg="StartContainer for \"de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87\"" Feb 13 19:23:33.794410 containerd[1889]: time="2025-02-13T19:23:33.794299211Z" level=info msg="CreateContainer within sandbox \"41fa92b2b175a5e7d7ef0f286882455625d9accdd2ff2c95cab8361cd17634e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6dc109b2745ff0e42a3cbe8ba32d7398c00031e741a327dd2fea84ec734a9c1d\"" Feb 13 19:23:33.795161 containerd[1889]: time="2025-02-13T19:23:33.794900399Z" level=info msg="StartContainer for \"6dc109b2745ff0e42a3cbe8ba32d7398c00031e741a327dd2fea84ec734a9c1d\"" Feb 13 19:23:33.799052 containerd[1889]: time="2025-02-13T19:23:33.798434672Z" level=info msg="CreateContainer within sandbox \"971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c\"" Feb 13 19:23:33.801398 containerd[1889]: time="2025-02-13T19:23:33.801368370Z" level=info msg="StartContainer for \"203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c\"" Feb 13 19:23:33.900373 systemd[1]: Started cri-containerd-203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c.scope - libcontainer container 203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c. Feb 13 19:23:33.902734 systemd[1]: Started cri-containerd-6dc109b2745ff0e42a3cbe8ba32d7398c00031e741a327dd2fea84ec734a9c1d.scope - libcontainer container 6dc109b2745ff0e42a3cbe8ba32d7398c00031e741a327dd2fea84ec734a9c1d. Feb 13 19:23:33.904882 systemd[1]: Started cri-containerd-de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87.scope - libcontainer container de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87. 
Feb 13 19:23:34.013063 containerd[1889]: time="2025-02-13T19:23:34.012737636Z" level=info msg="StartContainer for \"de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87\" returns successfully" Feb 13 19:23:34.013063 containerd[1889]: time="2025-02-13T19:23:34.012854581Z" level=info msg="StartContainer for \"6dc109b2745ff0e42a3cbe8ba32d7398c00031e741a327dd2fea84ec734a9c1d\" returns successfully" Feb 13 19:23:34.053041 containerd[1889]: time="2025-02-13T19:23:34.052601139Z" level=info msg="StartContainer for \"203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c\" returns successfully" Feb 13 19:23:34.327950 kubelet[2973]: W0213 19:23:34.324797 2973 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0": dial tcp 172.31.18.187:6443: connect: connection refused Feb 13 19:23:34.327950 kubelet[2973]: E0213 19:23:34.324864 2973 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-187&limit=500&resourceVersion=0\": dial tcp 172.31.18.187:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:23:34.791169 kubelet[2973]: E0213 19:23:34.790626 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": dial tcp 172.31.18.187:6443: connect: connection refused" interval="3.2s" Feb 13 19:23:34.906737 kubelet[2973]: E0213 19:23:34.906694 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:34.914938 kubelet[2973]: E0213 19:23:34.914399 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:34.918268 kubelet[2973]: E0213 19:23:34.916282 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:35.087895 kubelet[2973]: I0213 19:23:35.086819 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:35.088144 kubelet[2973]: E0213 19:23:35.087860 2973 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.187:6443/api/v1/nodes\": dial tcp 172.31.18.187:6443: connect: connection refused" node="ip-172-31-18-187" Feb 13 19:23:35.918792 kubelet[2973]: E0213 19:23:35.918758 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:35.919668 kubelet[2973]: E0213 19:23:35.919636 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:35.920241 kubelet[2973]: E0213 19:23:35.920220 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:36.928179 kubelet[2973]: E0213 19:23:36.927199 2973 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:36.928179 kubelet[2973]: E0213 19:23:36.927905 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:36.930887 kubelet[2973]: E0213 19:23:36.930605 2973 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:37.993604 kubelet[2973]: E0213 19:23:37.993566 2973 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-187" not found Feb 13 19:23:37.998950 kubelet[2973]: E0213 19:23:37.998831 2973 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-187\" not found" node="ip-172-31-18-187" Feb 13 19:23:38.290838 kubelet[2973]: I0213 19:23:38.290720 2973 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:38.309021 kubelet[2973]: I0213 19:23:38.308946 2973 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-187" Feb 13 19:23:38.309021 kubelet[2973]: E0213 19:23:38.309019 2973 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-187\": node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.313341 kubelet[2973]: E0213 19:23:38.313309 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.413508 kubelet[2973]: E0213 19:23:38.413464 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.521173 kubelet[2973]: E0213 19:23:38.516185 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.616854 kubelet[2973]: E0213 19:23:38.616725 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.717048 kubelet[2973]: E0213 19:23:38.716965 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.817829 kubelet[2973]: E0213 19:23:38.817782 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:38.918027 kubelet[2973]: E0213 19:23:38.917960 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.026293 kubelet[2973]: E0213 19:23:39.026246 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.126789 kubelet[2973]: E0213 19:23:39.126746 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.228632 kubelet[2973]: E0213 19:23:39.226953 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.327952 kubelet[2973]: E0213 19:23:39.327887 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not 
found" Feb 13 19:23:39.428661 kubelet[2973]: E0213 19:23:39.428581 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.529671 kubelet[2973]: E0213 19:23:39.529540 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.629989 kubelet[2973]: E0213 19:23:39.629942 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.733780 kubelet[2973]: E0213 19:23:39.733723 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.834975 kubelet[2973]: E0213 19:23:39.834216 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:39.935304 kubelet[2973]: E0213 19:23:39.935268 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:40.035450 kubelet[2973]: E0213 19:23:40.035401 2973 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-187\" not found" Feb 13 19:23:40.100924 systemd[1]: Reloading requested from client PID 3248 ('systemctl') (unit session-9.scope)... Feb 13 19:23:40.101168 systemd[1]: Reloading... Feb 13 19:23:40.186959 kubelet[2973]: I0213 19:23:40.185046 2973 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:40.218624 kubelet[2973]: I0213 19:23:40.218592 2973 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:40.249310 kubelet[2973]: I0213 19:23:40.249277 2973 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:40.354246 zram_generator::config[3288]: No configuration found. Feb 13 19:23:40.570973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:23:40.719536 kubelet[2973]: I0213 19:23:40.719496 2973 apiserver.go:52] "Watching apiserver" Feb 13 19:23:40.791942 kubelet[2973]: I0213 19:23:40.791909 2973 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:23:40.795952 systemd[1]: Reloading finished in 693 ms. Feb 13 19:23:40.892930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:40.911335 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:23:40.911615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:40.911688 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 123.3M memory peak, 0B memory swap peak. Feb 13 19:23:40.917908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:23:41.300606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:23:41.314710 (kubelet)[3345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:23:41.448185 kubelet[3345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:23:41.448185 kubelet[3345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:23:41.448185 kubelet[3345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:23:41.448185 kubelet[3345]: I0213 19:23:41.447860 3345 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:23:41.458976 kubelet[3345]: I0213 19:23:41.458665 3345 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:23:41.458976 kubelet[3345]: I0213 19:23:41.458690 3345 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:23:41.460757 kubelet[3345]: I0213 19:23:41.460732 3345 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:23:41.467526 kubelet[3345]: I0213 19:23:41.467144 3345 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:23:41.475205 kubelet[3345]: I0213 19:23:41.475172 3345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:23:41.485798 sudo[3359]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:23:41.486539 sudo[3359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:23:41.488755 kubelet[3345]: E0213 19:23:41.487413 3345 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:23:41.488755 kubelet[3345]: I0213 19:23:41.487439 3345 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:23:41.494355 kubelet[3345]: I0213 19:23:41.494322 3345 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:23:41.494886 kubelet[3345]: I0213 19:23:41.494854 3345 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:23:41.495330 kubelet[3345]: I0213 19:23:41.494992 3345 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-187","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:23:41.495330 kubelet[3345]: I0213 19:23:41.495220 3345 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:23:41.495330 kubelet[3345]: I0213 19:23:41.495234 3345 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:23:41.495330 kubelet[3345]: I0213 19:23:41.495293 3345 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:23:41.497008 kubelet[3345]: I0213 19:23:41.495798 3345 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:23:41.497008 kubelet[3345]: I0213 19:23:41.495817 3345 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:23:41.497224 kubelet[3345]: I0213 19:23:41.497212 3345 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:23:41.497307 kubelet[3345]: I0213 19:23:41.497299 3345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:23:41.498810 kubelet[3345]: I0213 19:23:41.498643 3345 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:23:41.500793 kubelet[3345]: I0213 19:23:41.500776 3345 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:23:41.504441 kubelet[3345]: I0213 19:23:41.504205 3345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:23:41.510222 kubelet[3345]: I0213 19:23:41.509640 3345 server.go:1287] "Started kubelet" Feb 13 19:23:41.513889 kubelet[3345]: I0213 19:23:41.513862 3345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:23:41.592225 kubelet[3345]: I0213 19:23:41.514296 3345 server.go:169] "Starting 
to listen" address="0.0.0.0" port=10250 Feb 13 19:23:41.613290 kubelet[3345]: I0213 19:23:41.611106 3345 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:23:41.615881 kubelet[3345]: I0213 19:23:41.514809 3345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:23:41.616701 kubelet[3345]: I0213 19:23:41.514374 3345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:23:41.617275 kubelet[3345]: I0213 19:23:41.617087 3345 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:23:41.617618 kubelet[3345]: I0213 19:23:41.589105 3345 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:23:41.622910 kubelet[3345]: I0213 19:23:41.589087 3345 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:23:41.627856 kubelet[3345]: I0213 19:23:41.627667 3345 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:23:41.632157 kubelet[3345]: I0213 19:23:41.631995 3345 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:23:41.633434 kubelet[3345]: I0213 19:23:41.633055 3345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:23:41.634323 kubelet[3345]: I0213 19:23:41.633787 3345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:23:41.635754 kubelet[3345]: I0213 19:23:41.634882 3345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:23:41.635754 kubelet[3345]: I0213 19:23:41.634938 3345 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:23:41.635754 kubelet[3345]: I0213 19:23:41.634966 3345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:23:41.635754 kubelet[3345]: I0213 19:23:41.634979 3345 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:23:41.635754 kubelet[3345]: E0213 19:23:41.635041 3345 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:23:41.648249 kubelet[3345]: E0213 19:23:41.648095 3345 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:23:41.651000 kubelet[3345]: I0213 19:23:41.649922 3345 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:23:41.735290 kubelet[3345]: E0213 19:23:41.735255 3345 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:23:41.768098 kubelet[3345]: I0213 19:23:41.768061 3345 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:23:41.768532 kubelet[3345]: I0213 19:23:41.768470 3345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768669 3345 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768921 3345 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768935 3345 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768961 3345 policy_none.go:49] "None policy: Start" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768973 3345 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.768984 3345 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:23:41.769199 kubelet[3345]: I0213 19:23:41.769122 3345 state_mem.go:75] "Updated machine memory state" Feb 13 19:23:41.778558 kubelet[3345]: I0213 19:23:41.777532 3345 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:23:41.778558 kubelet[3345]: I0213 19:23:41.777729 3345 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:23:41.778558 kubelet[3345]: I0213 19:23:41.777741 3345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:23:41.792866 kubelet[3345]: I0213 19:23:41.791830 3345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:23:41.795921 kubelet[3345]: E0213 19:23:41.795893 3345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:23:41.895754 kubelet[3345]: I0213 19:23:41.895656 3345 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-187" Feb 13 19:23:41.943361 kubelet[3345]: I0213 19:23:41.943325 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:41.947906 kubelet[3345]: I0213 19:23:41.947177 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:41.947906 kubelet[3345]: I0213 19:23:41.947568 3345 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-18-187" Feb 13 19:23:41.947906 kubelet[3345]: I0213 19:23:41.947636 3345 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-187" Feb 13 19:23:41.952207 kubelet[3345]: I0213 19:23:41.952174 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:42.006120 kubelet[3345]: E0213 19:23:42.006081 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-187\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.006289 kubelet[3345]: E0213 19:23:42.006199 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-187\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.007334 kubelet[3345]: E0213 19:23:42.007308 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-187\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:42.036682 kubelet[3345]: I0213 19:23:42.036586 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-ca-certs\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.036823 kubelet[3345]: I0213 19:23:42.036687 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.036823 kubelet[3345]: I0213 19:23:42.036730 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.036823 kubelet[3345]: I0213 19:23:42.036755 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.036823 kubelet[3345]: I0213 19:23:42.036777 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.037549 kubelet[3345]: I0213 19:23:42.036838 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b59d6363d994a676cad324d0d1ee8b10-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-187\" (UID: \"b59d6363d994a676cad324d0d1ee8b10\") " pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:42.037549 kubelet[3345]: I0213 19:23:42.036864 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.037549 kubelet[3345]: I0213 19:23:42.036890 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e81069c321c5f4a7b1a82c3307cef94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-187\" (UID: \"e81069c321c5f4a7b1a82c3307cef94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.037549 kubelet[3345]: I0213 19:23:42.036919 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7d0687098d317ac29ebc84e6789f9d6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-187\" (UID: \"c7d0687098d317ac29ebc84e6789f9d6\") " pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.501745 kubelet[3345]: I0213 19:23:42.501438 3345 apiserver.go:52] "Watching apiserver" Feb 13 19:23:42.524938 kubelet[3345]: I0213 19:23:42.524873 3345 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:23:42.711741 sudo[3359]: pam_unix(sudo:session): session closed for user root Feb 13 19:23:42.717201 kubelet[3345]: I0213 19:23:42.715501 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.717201 kubelet[3345]: I0213 19:23:42.716006 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.719161 kubelet[3345]: I0213 19:23:42.717955 3345 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:42.735368 kubelet[3345]: E0213 19:23:42.735327 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-187\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-187" Feb 13 19:23:42.736932 kubelet[3345]: E0213 19:23:42.736715 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-187\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-187" Feb 13 19:23:42.739289 kubelet[3345]: E0213 19:23:42.739057 3345 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-187\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-187" Feb 13 19:23:42.772667 kubelet[3345]: I0213 19:23:42.771480 3345 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-187" podStartSLOduration=2.771383513 podStartE2EDuration="2.771383513s" podCreationTimestamp="2025-02-13 19:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:42.769090096 +0000 UTC m=+1.441971849" watchObservedRunningTime="2025-02-13 19:23:42.771383513 +0000 UTC m=+1.444265235" Feb 13 19:23:42.798486 kubelet[3345]: I0213 19:23:42.798425 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-187" podStartSLOduration=2.798407493 podStartE2EDuration="2.798407493s" podCreationTimestamp="2025-02-13 19:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:42.785677898 +0000 UTC m=+1.458559633" watchObservedRunningTime="2025-02-13 19:23:42.798407493 +0000 UTC m=+1.471289227" Feb 13 19:23:42.824936 kubelet[3345]: I0213 19:23:42.824776 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-187" podStartSLOduration=2.824753018 podStartE2EDuration="2.824753018s" podCreationTimestamp="2025-02-13 19:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:42.799232347 +0000 UTC m=+1.472114081" watchObservedRunningTime="2025-02-13 19:23:42.824753018 +0000 UTC m=+1.497634753" Feb 13 19:23:44.953595 sudo[2227]: pam_unix(sudo:session): session closed for user root Feb 13 19:23:44.983840 sshd[2226]: Connection closed by 139.178.89.65 port 55836 Feb 13 19:23:44.982816 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:44.991047 systemd[1]: sshd@8-172.31.18.187:22-139.178.89.65:55836.service: Deactivated successfully. Feb 13 19:23:44.993644 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:23:44.993955 systemd[1]: session-9.scope: Consumed 5.053s CPU time, 137.8M memory peak, 0B memory swap peak. Feb 13 19:23:44.994745 systemd-logind[1871]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:23:45.001316 systemd-logind[1871]: Removed session 9. Feb 13 19:23:45.691284 kubelet[3345]: I0213 19:23:45.691229 3345 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:23:45.693422 containerd[1889]: time="2025-02-13T19:23:45.692835104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:23:45.695414 kubelet[3345]: I0213 19:23:45.693643 3345 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:23:46.417622 systemd[1]: Created slice kubepods-besteffort-pod66648e3b_7eaf_4137_ac01_350939208a57.slice - libcontainer container kubepods-besteffort-pod66648e3b_7eaf_4137_ac01_350939208a57.slice. Feb 13 19:23:46.433680 systemd[1]: Created slice kubepods-burstable-podc492be81_a1df_441c_9ddc_36fb9c692d0d.slice - libcontainer container kubepods-burstable-podc492be81_a1df_441c_9ddc_36fb9c692d0d.slice. 
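
Annotation: the pod_startup_latency_tracker entries above reward a close read. podStartSLOduration excludes image-pull time, and for these static pods both pull timestamps are Go's zero time (0001-01-01 ...), so the SLO and E2E durations coincide; the m=+N.NNN suffix on each timestamp is the kubelet's monotonic clock, i.e. seconds since the kubelet process started. A minimal parsing sketch, assuming only the klog key="value" layout shown above (the code itself is illustrative, not part of any Kubernetes tooling):

    import re

    # One of the tracker lines above, condensed to its duration fields.
    line = ('podStartSLOduration=2.771383513 podStartE2EDuration="2.771383513s" '
            'observedRunningTime="2025-02-13 19:23:42.769090096 +0000 UTC m=+1.441971849" '
            'watchObservedRunningTime="2025-02-13 19:23:42.771383513 +0000 UTC m=+1.444265235"')

    slo = float(re.search(r'podStartSLOduration=([\d.]+)', line).group(1))
    # m=+N.NNN is Go's monotonic clock reading: seconds since kubelet start.
    observed, watched = (float(m) for m in re.findall(r'm=\+([\d.]+)', line))

    print(f"SLO duration {slo:.3f}s; running {observed:.3f}s after kubelet start; "
          f"watch saw it {watched - observed:.3f}s later")
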
Feb 13 19:23:46.478512 kubelet[3345]: I0213 19:23:46.478374 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-config-path\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.480264 kubelet[3345]: I0213 19:23:46.480224 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-lib-modules\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.480733 kubelet[3345]: I0213 19:23:46.480593 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c492be81-a1df-441c-9ddc-36fb9c692d0d-clustermesh-secrets\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480843 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66648e3b-7eaf-4137-ac01-350939208a57-kube-proxy\") pod \"kube-proxy-njdlw\" (UID: \"66648e3b-7eaf-4137-ac01-350939208a57\") " pod="kube-system/kube-proxy-njdlw" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480876 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66648e3b-7eaf-4137-ac01-350939208a57-lib-modules\") pod \"kube-proxy-njdlw\" (UID: \"66648e3b-7eaf-4137-ac01-350939208a57\") " pod="kube-system/kube-proxy-njdlw" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480902 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-net\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480927 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmks\" (UniqueName: \"kubernetes.io/projected/66648e3b-7eaf-4137-ac01-350939208a57-kube-api-access-8wmks\") pod \"kube-proxy-njdlw\" (UID: \"66648e3b-7eaf-4137-ac01-350939208a57\") " pod="kube-system/kube-proxy-njdlw" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480953 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-etc-cni-netd\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481342 kubelet[3345]: I0213 19:23:46.480977 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-bpf-maps\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.480996 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-cgroup\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.481022 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cni-path\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.481047 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-run\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.481067 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-xtables-lock\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.481088 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-kernel\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481625 kubelet[3345]: I0213 19:23:46.481114 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66648e3b-7eaf-4137-ac01-350939208a57-xtables-lock\") pod \"kube-proxy-njdlw\" (UID: \"66648e3b-7eaf-4137-ac01-350939208a57\") " pod="kube-system/kube-proxy-njdlw" Feb 13 19:23:46.481967 kubelet[3345]: I0213 19:23:46.481148 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-hostproc\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481967 kubelet[3345]: I0213 19:23:46.481181 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-hubble-tls\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.481967 kubelet[3345]: I0213 19:23:46.481208 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-kube-api-access-rn2st\") pod \"cilium-vmddg\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " pod="kube-system/cilium-vmddg" Feb 13 19:23:46.724933 systemd[1]: Created slice kubepods-besteffort-podffb44e99_4525_4809_996a_6200aa62fac8.slice - libcontainer container kubepods-besteffort-podffb44e99_4525_4809_996a_6200aa62fac8.slice. 
Feb 13 19:23:46.740084 containerd[1889]: time="2025-02-13T19:23:46.735524337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njdlw,Uid:66648e3b-7eaf-4137-ac01-350939208a57,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:46.740084 containerd[1889]: time="2025-02-13T19:23:46.739797421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vmddg,Uid:c492be81-a1df-441c-9ddc-36fb9c692d0d,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:46.783049 kubelet[3345]: I0213 19:23:46.782999 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ftml\" (UniqueName: \"kubernetes.io/projected/ffb44e99-4525-4809-996a-6200aa62fac8-kube-api-access-9ftml\") pod \"cilium-operator-6c4d7847fc-sxzl5\" (UID: \"ffb44e99-4525-4809-996a-6200aa62fac8\") " pod="kube-system/cilium-operator-6c4d7847fc-sxzl5" Feb 13 19:23:46.784336 kubelet[3345]: I0213 19:23:46.784244 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb44e99-4525-4809-996a-6200aa62fac8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sxzl5\" (UID: \"ffb44e99-4525-4809-996a-6200aa62fac8\") " pod="kube-system/cilium-operator-6c4d7847fc-sxzl5" Feb 13 19:23:46.809359 containerd[1889]: time="2025-02-13T19:23:46.809172661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:46.809359 containerd[1889]: time="2025-02-13T19:23:46.809272345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:46.809680 containerd[1889]: time="2025-02-13T19:23:46.809594645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:46.810585 containerd[1889]: time="2025-02-13T19:23:46.810523540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:46.815207 containerd[1889]: time="2025-02-13T19:23:46.814148556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:46.815207 containerd[1889]: time="2025-02-13T19:23:46.814275421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:46.815207 containerd[1889]: time="2025-02-13T19:23:46.814346053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:46.815207 containerd[1889]: time="2025-02-13T19:23:46.814535815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:46.841337 systemd[1]: Started cri-containerd-0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880.scope - libcontainer container 0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880. Feb 13 19:23:46.844584 systemd[1]: Started cri-containerd-71975bb0b05fd2ae92e8dd35252bf4c9d8bd0617a0efe7304fc7b9d91ed861f0.scope - libcontainer container 71975bb0b05fd2ae92e8dd35252bf4c9d8bd0617a0efe7304fc7b9d91ed861f0. 
Feb 13 19:23:46.924983 containerd[1889]: time="2025-02-13T19:23:46.924931225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vmddg,Uid:c492be81-a1df-441c-9ddc-36fb9c692d0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\"" Feb 13 19:23:46.930164 containerd[1889]: time="2025-02-13T19:23:46.930113848Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:23:46.934239 containerd[1889]: time="2025-02-13T19:23:46.933320846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njdlw,Uid:66648e3b-7eaf-4137-ac01-350939208a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"71975bb0b05fd2ae92e8dd35252bf4c9d8bd0617a0efe7304fc7b9d91ed861f0\"" Feb 13 19:23:46.946460 containerd[1889]: time="2025-02-13T19:23:46.945207465Z" level=info msg="CreateContainer within sandbox \"71975bb0b05fd2ae92e8dd35252bf4c9d8bd0617a0efe7304fc7b9d91ed861f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:23:47.004774 containerd[1889]: time="2025-02-13T19:23:47.002203321Z" level=info msg="CreateContainer within sandbox \"71975bb0b05fd2ae92e8dd35252bf4c9d8bd0617a0efe7304fc7b9d91ed861f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"413085dac6cfb1144190e720492881873549b2dfe19a4b33de51aefdf903f0d4\"" Feb 13 19:23:47.004774 containerd[1889]: time="2025-02-13T19:23:47.003591435Z" level=info msg="StartContainer for \"413085dac6cfb1144190e720492881873549b2dfe19a4b33de51aefdf903f0d4\"" Feb 13 19:23:47.036044 containerd[1889]: time="2025-02-13T19:23:47.035998738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sxzl5,Uid:ffb44e99-4525-4809-996a-6200aa62fac8,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:47.068485 systemd[1]: Started cri-containerd-413085dac6cfb1144190e720492881873549b2dfe19a4b33de51aefdf903f0d4.scope - libcontainer container 413085dac6cfb1144190e720492881873549b2dfe19a4b33de51aefdf903f0d4. Feb 13 19:23:47.102672 containerd[1889]: time="2025-02-13T19:23:47.102300948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:47.102966 containerd[1889]: time="2025-02-13T19:23:47.102623742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:47.102966 containerd[1889]: time="2025-02-13T19:23:47.102678055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:47.102966 containerd[1889]: time="2025-02-13T19:23:47.102861760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:47.154028 systemd[1]: Started cri-containerd-c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97.scope - libcontainer container c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97. 
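
Annotation: the PullImage line above uses the tag-plus-digest reference form; when both are present, the runtime resolves content by digest and the tag is purely informational. A sketch splitting that reference into its parts (reference copied from the log; this string handling is a simplification of the full OCI reference grammar, e.g. it ignores registries with port numbers):

    ref = ("quay.io/cilium/cilium:v1.12.5"
           "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

    name, _, digest = ref.partition("@")   # digest, when present, wins over the tag
    repo, _, tag = name.rpartition(":")
    registry = repo.split("/", 1)[0]

    print(f"registry={registry} repository={repo} tag={tag}")
    print(f"digest={digest}")
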
Feb 13 19:23:47.173444 containerd[1889]: time="2025-02-13T19:23:47.173395337Z" level=info msg="StartContainer for \"413085dac6cfb1144190e720492881873549b2dfe19a4b33de51aefdf903f0d4\" returns successfully" Feb 13 19:23:47.224378 containerd[1889]: time="2025-02-13T19:23:47.224261691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sxzl5,Uid:ffb44e99-4525-4809-996a-6200aa62fac8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\"" Feb 13 19:23:53.389063 kubelet[3345]: I0213 19:23:53.388981 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-njdlw" podStartSLOduration=7.38895137 podStartE2EDuration="7.38895137s" podCreationTimestamp="2025-02-13 19:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:47.780084825 +0000 UTC m=+6.452966560" watchObservedRunningTime="2025-02-13 19:23:53.38895137 +0000 UTC m=+12.061833106" Feb 13 19:23:55.010556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024784443.mount: Deactivated successfully. Feb 13 19:23:58.771448 containerd[1889]: time="2025-02-13T19:23:58.771393709Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:58.773011 containerd[1889]: time="2025-02-13T19:23:58.772965922Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:23:58.795862 containerd[1889]: time="2025-02-13T19:23:58.795566582Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:58.797608 containerd[1889]: time="2025-02-13T19:23:58.797397147Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.865669909s" Feb 13 19:23:58.797608 containerd[1889]: time="2025-02-13T19:23:58.797447844Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:23:58.799658 containerd[1889]: time="2025-02-13T19:23:58.799412935Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:23:58.800603 containerd[1889]: time="2025-02-13T19:23:58.800250526Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:23:58.916094 containerd[1889]: time="2025-02-13T19:23:58.916047639Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\"" Feb 13 19:23:58.918314 containerd[1889]: time="2025-02-13T19:23:58.918222538Z" level=info msg="StartContainer for \"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\"" Feb 13 19:23:59.285361 systemd[1]: Started cri-containerd-963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9.scope - libcontainer container 963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9. Feb 13 19:23:59.367567 containerd[1889]: time="2025-02-13T19:23:59.367502642Z" level=info msg="StartContainer for \"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\" returns successfully" Feb 13 19:23:59.389058 systemd[1]: cri-containerd-963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9.scope: Deactivated successfully. Feb 13 19:23:59.669700 containerd[1889]: time="2025-02-13T19:23:59.647388635Z" level=info msg="shim disconnected" id=963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9 namespace=k8s.io Feb 13 19:23:59.670052 containerd[1889]: time="2025-02-13T19:23:59.669702628Z" level=warning msg="cleaning up after shim disconnected" id=963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9 namespace=k8s.io Feb 13 19:23:59.670052 containerd[1889]: time="2025-02-13T19:23:59.669721644Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:23:59.886195 containerd[1889]: time="2025-02-13T19:23:59.885979730Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:23:59.906943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9-rootfs.mount: Deactivated successfully. Feb 13 19:23:59.932230 containerd[1889]: time="2025-02-13T19:23:59.932062159Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\"" Feb 13 19:23:59.933022 containerd[1889]: time="2025-02-13T19:23:59.932915895Z" level=info msg="StartContainer for \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\"" Feb 13 19:23:59.984351 systemd[1]: Started cri-containerd-c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c.scope - libcontainer container c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c. Feb 13 19:24:00.060069 containerd[1889]: time="2025-02-13T19:24:00.059957154Z" level=info msg="StartContainer for \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\" returns successfully" Feb 13 19:24:00.068785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:24:00.069598 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:24:00.070288 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:24:00.075615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:24:00.077803 systemd[1]: cri-containerd-c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c.scope: Deactivated successfully. Feb 13 19:24:00.136832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:24:00.149522 containerd[1889]: time="2025-02-13T19:24:00.149418696Z" level=info msg="shim disconnected" id=c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c namespace=k8s.io Feb 13 19:24:00.150156 containerd[1889]: time="2025-02-13T19:24:00.149999204Z" level=warning msg="cleaning up after shim disconnected" id=c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c namespace=k8s.io Feb 13 19:24:00.150156 containerd[1889]: time="2025-02-13T19:24:00.150023902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:00.905484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c-rootfs.mount: Deactivated successfully. Feb 13 19:24:00.919062 containerd[1889]: time="2025-02-13T19:24:00.918556472Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:24:00.999029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088477808.mount: Deactivated successfully. Feb 13 19:24:01.033566 containerd[1889]: time="2025-02-13T19:24:01.033519785Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\"" Feb 13 19:24:01.042392 containerd[1889]: time="2025-02-13T19:24:01.042346983Z" level=info msg="StartContainer for \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\"" Feb 13 19:24:01.132141 systemd[1]: Started cri-containerd-cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6.scope - libcontainer container cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6. Feb 13 19:24:01.307906 containerd[1889]: time="2025-02-13T19:24:01.307857006Z" level=info msg="StartContainer for \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\" returns successfully" Feb 13 19:24:01.320291 systemd[1]: cri-containerd-cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6.scope: Deactivated successfully. Feb 13 19:24:01.675405 containerd[1889]: time="2025-02-13T19:24:01.675116542Z" level=info msg="shim disconnected" id=cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6 namespace=k8s.io Feb 13 19:24:01.675405 containerd[1889]: time="2025-02-13T19:24:01.675207306Z" level=warning msg="cleaning up after shim disconnected" id=cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6 namespace=k8s.io Feb 13 19:24:01.675405 containerd[1889]: time="2025-02-13T19:24:01.675221790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:01.914306 systemd[1]: run-containerd-runc-k8s.io-cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6-runc.mKlI1X.mount: Deactivated successfully. Feb 13 19:24:01.914456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6-rootfs.mount: Deactivated successfully. Feb 13 19:24:02.025554 containerd[1889]: time="2025-02-13T19:24:02.017433743Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:24:02.085884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009514683.mount: Deactivated successfully. 
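
Annotation: mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs above are Cilium's init containers running to completion one at a time; each short-lived container leaves the same trail of events (CreateContainer, StartContainer, scope Deactivated, shim disconnected). A sketch reconstructing lifecycles from that trail (container ids abbreviated from the log; the event names paraphrase the log messages):

    # Condensed (container-id, event) pairs from the containerd/systemd lines above.
    EVENTS = [
        ("963df4b8", "CreateContainer"), ("963df4b8", "StartContainer"),
        ("963df4b8", "scope deactivated"), ("963df4b8", "shim disconnected"),
        ("c0589803", "CreateContainer"), ("c0589803", "StartContainer"),
        ("c0589803", "scope deactivated"), ("c0589803", "shim disconnected"),
        ("cf213306", "CreateContainer"), ("cf213306", "StartContainer"),
        ("cf213306", "scope deactivated"), ("cf213306", "shim disconnected"),
    ]

    lifecycle = {}
    for cid, event in EVENTS:
        lifecycle.setdefault(cid, []).append(event)

    for cid, events in lifecycle.items():
        done = events[-1] == "shim disconnected"   # teardown completed cleanly
        print(f"{cid}: {' -> '.join(events)}" + (" (exited cleanly)" if done else ""))
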
Feb 13 19:24:02.138024 containerd[1889]: time="2025-02-13T19:24:02.137959538Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\"" Feb 13 19:24:02.173700 containerd[1889]: time="2025-02-13T19:24:02.157756347Z" level=info msg="StartContainer for \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\"" Feb 13 19:24:02.194933 containerd[1889]: time="2025-02-13T19:24:02.184615270Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:24:02.231215 containerd[1889]: time="2025-02-13T19:24:02.231152350Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:24:02.328519 containerd[1889]: time="2025-02-13T19:24:02.325819632Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:24:02.342307 containerd[1889]: time="2025-02-13T19:24:02.342228846Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.542772708s" Feb 13 19:24:02.342307 containerd[1889]: time="2025-02-13T19:24:02.342302147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:24:02.347496 containerd[1889]: time="2025-02-13T19:24:02.347453944Z" level=info msg="CreateContainer within sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:24:02.375474 systemd[1]: Started cri-containerd-4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608.scope - libcontainer container 4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608. Feb 13 19:24:02.390216 containerd[1889]: time="2025-02-13T19:24:02.390172229Z" level=info msg="CreateContainer within sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\"" Feb 13 19:24:02.393844 containerd[1889]: time="2025-02-13T19:24:02.392108455Z" level=info msg="StartContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\"" Feb 13 19:24:02.435751 systemd[1]: cri-containerd-4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608.scope: Deactivated successfully. 
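
Annotation: the two completed pulls report both bytes read and elapsed time, which gives a rough registry throughput figure, treating the logged bytes-read value as the compressed transfer size (numbers copied from the "stop pulling" and "Pulled image ... in Ns" lines above):

    pulls = [
        ("cilium:v1.12.5",           166_730_503, 11.865669909),
        ("operator-generic:v1.12.5",  18_904_197,  3.542772708),
    ]
    for image, nbytes, secs in pulls:
        print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s across {secs:.1f}s")
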
Feb 13 19:24:02.439456 containerd[1889]: time="2025-02-13T19:24:02.439410422Z" level=info msg="StartContainer for \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\" returns successfully" Feb 13 19:24:02.454430 systemd[1]: Started cri-containerd-ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d.scope - libcontainer container ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d. Feb 13 19:24:02.516254 containerd[1889]: time="2025-02-13T19:24:02.516146364Z" level=info msg="StartContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" returns successfully" Feb 13 19:24:02.531529 containerd[1889]: time="2025-02-13T19:24:02.531347641Z" level=info msg="shim disconnected" id=4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608 namespace=k8s.io Feb 13 19:24:02.534526 containerd[1889]: time="2025-02-13T19:24:02.531530767Z" level=warning msg="cleaning up after shim disconnected" id=4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608 namespace=k8s.io Feb 13 19:24:02.534526 containerd[1889]: time="2025-02-13T19:24:02.531547764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:24:02.924470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608-rootfs.mount: Deactivated successfully. Feb 13 19:24:02.962081 containerd[1889]: time="2025-02-13T19:24:02.962040964Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:24:03.013012 containerd[1889]: time="2025-02-13T19:24:03.012148299Z" level=info msg="CreateContainer within sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\"" Feb 13 19:24:03.016678 containerd[1889]: time="2025-02-13T19:24:03.015935736Z" level=info msg="StartContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\"" Feb 13 19:24:03.226030 systemd[1]: Started cri-containerd-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af.scope - libcontainer container faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af. Feb 13 19:24:03.272145 kubelet[3345]: I0213 19:24:03.271707 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sxzl5" podStartSLOduration=2.154279551 podStartE2EDuration="17.271681332s" podCreationTimestamp="2025-02-13 19:23:46 +0000 UTC" firstStartedPulling="2025-02-13 19:23:47.225759719 +0000 UTC m=+5.898641434" lastFinishedPulling="2025-02-13 19:24:02.343161493 +0000 UTC m=+21.016043215" observedRunningTime="2025-02-13 19:24:02.987025559 +0000 UTC m=+21.659907293" watchObservedRunningTime="2025-02-13 19:24:03.271681332 +0000 UTC m=+21.944563068" Feb 13 19:24:03.546344 containerd[1889]: time="2025-02-13T19:24:03.545889251Z" level=info msg="StartContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" returns successfully" Feb 13 19:24:03.906701 systemd[1]: run-containerd-runc-k8s.io-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af-runc.Pn4eTV.mount: Deactivated successfully. 
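
Annotation: unlike the static pods earlier, the cilium-operator startup entry that follows shortly (podStartSLOduration=2.154279551, podStartE2EDuration=17.271681332s) carries real pull timestamps, and it makes the relationship visible: SLO duration is the E2E duration minus time spent pulling images. Checking the arithmetic (timestamps truncated to microseconds, since strptime's %f accepts at most six fractional digits):

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f %z"
    first_pull = datetime.strptime("2025-02-13 19:23:47.225759 +0000", FMT)
    last_pull  = datetime.strptime("2025-02-13 19:24:02.343161 +0000", FMT)

    e2e = 17.271681332                      # podStartE2EDuration from the log
    pulling = (last_pull - first_pull).total_seconds()
    print(f"pulling: {pulling:.3f}s; E2E - pulling = {e2e - pulling:.3f}s "
          f"(log: podStartSLOduration=2.154279551)")
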
Feb 13 19:24:04.184107 kubelet[3345]: I0213 19:24:04.184073 3345 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:24:04.421604 kubelet[3345]: I0213 19:24:04.421565 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27g66\" (UniqueName: \"kubernetes.io/projected/2a1aac28-7159-4ba4-85cf-f302c65ee3da-kube-api-access-27g66\") pod \"coredns-668d6bf9bc-t5w5s\" (UID: \"2a1aac28-7159-4ba4-85cf-f302c65ee3da\") " pod="kube-system/coredns-668d6bf9bc-t5w5s" Feb 13 19:24:04.422102 kubelet[3345]: I0213 19:24:04.421623 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1aac28-7159-4ba4-85cf-f302c65ee3da-config-volume\") pod \"coredns-668d6bf9bc-t5w5s\" (UID: \"2a1aac28-7159-4ba4-85cf-f302c65ee3da\") " pod="kube-system/coredns-668d6bf9bc-t5w5s" Feb 13 19:24:04.429235 systemd[1]: Created slice kubepods-burstable-pod2a1aac28_7159_4ba4_85cf_f302c65ee3da.slice - libcontainer container kubepods-burstable-pod2a1aac28_7159_4ba4_85cf_f302c65ee3da.slice. Feb 13 19:24:04.466760 systemd[1]: Created slice kubepods-burstable-pod393d9b64_0531_42a9_ba41_5767b1b48de1.slice - libcontainer container kubepods-burstable-pod393d9b64_0531_42a9_ba41_5767b1b48de1.slice. Feb 13 19:24:04.525714 kubelet[3345]: I0213 19:24:04.524284 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393d9b64-0531-42a9-ba41-5767b1b48de1-config-volume\") pod \"coredns-668d6bf9bc-r4q6x\" (UID: \"393d9b64-0531-42a9-ba41-5767b1b48de1\") " pod="kube-system/coredns-668d6bf9bc-r4q6x" Feb 13 19:24:04.525714 kubelet[3345]: I0213 19:24:04.524350 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwcs\" (UniqueName: \"kubernetes.io/projected/393d9b64-0531-42a9-ba41-5767b1b48de1-kube-api-access-zlwcs\") pod \"coredns-668d6bf9bc-r4q6x\" (UID: \"393d9b64-0531-42a9-ba41-5767b1b48de1\") " pod="kube-system/coredns-668d6bf9bc-r4q6x" Feb 13 19:24:04.762095 containerd[1889]: time="2025-02-13T19:24:04.761634858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5w5s,Uid:2a1aac28-7159-4ba4-85cf-f302c65ee3da,Namespace:kube-system,Attempt:0,}" Feb 13 19:24:04.777689 containerd[1889]: time="2025-02-13T19:24:04.776992680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r4q6x,Uid:393d9b64-0531-42a9-ba41-5767b1b48de1,Namespace:kube-system,Attempt:0,}" Feb 13 19:24:09.262443 systemd-networkd[1798]: cilium_host: Link UP Feb 13 19:24:09.262928 (udev-worker)[4134]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:24:09.263431 systemd-networkd[1798]: cilium_net: Link UP Feb 13 19:24:09.267612 systemd-networkd[1798]: cilium_net: Gained carrier Feb 13 19:24:09.267877 systemd-networkd[1798]: cilium_host: Gained carrier Feb 13 19:24:09.268724 systemd-networkd[1798]: cilium_net: Gained IPv6LL Feb 13 19:24:09.269975 systemd-networkd[1798]: cilium_host: Gained IPv6LL Feb 13 19:24:09.270667 (udev-worker)[4168]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:24:09.604043 (udev-worker)[4174]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:24:09.613456 systemd-networkd[1798]: cilium_vxlan: Link UP Feb 13 19:24:09.613467 systemd-networkd[1798]: cilium_vxlan: Gained carrier Feb 13 19:24:10.919343 systemd-networkd[1798]: cilium_vxlan: Gained IPv6LL Feb 13 19:24:12.107186 kernel: NET: Registered PF_ALG protocol family Feb 13 19:24:13.778005 systemd-networkd[1798]: lxc_health: Link UP Feb 13 19:24:13.790321 systemd-networkd[1798]: lxc_health: Gained carrier Feb 13 19:24:14.446493 (udev-worker)[4501]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:24:14.452645 systemd-networkd[1798]: lxc2782312c9add: Link UP Feb 13 19:24:14.459390 kernel: eth0: renamed from tmpa1ea6 Feb 13 19:24:14.462744 systemd-networkd[1798]: lxc908631a68fa9: Link UP Feb 13 19:24:14.479489 (udev-worker)[4175]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:24:14.480692 systemd-networkd[1798]: lxc2782312c9add: Gained carrier Feb 13 19:24:14.487176 kernel: eth0: renamed from tmp2e2ad Feb 13 19:24:14.500075 systemd-networkd[1798]: lxc908631a68fa9: Gained carrier Feb 13 19:24:14.836883 kubelet[3345]: I0213 19:24:14.836654 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vmddg" podStartSLOduration=16.965771173 podStartE2EDuration="28.836631891s" podCreationTimestamp="2025-02-13 19:23:46 +0000 UTC" firstStartedPulling="2025-02-13 19:23:46.927806015 +0000 UTC m=+5.600687741" lastFinishedPulling="2025-02-13 19:23:58.798666728 +0000 UTC m=+17.471548459" observedRunningTime="2025-02-13 19:24:05.015962238 +0000 UTC m=+23.688843975" watchObservedRunningTime="2025-02-13 19:24:14.836631891 +0000 UTC m=+33.509513645" Feb 13 19:24:15.271313 systemd-networkd[1798]: lxc_health: Gained IPv6LL Feb 13 19:24:15.655672 systemd-networkd[1798]: lxc908631a68fa9: Gained IPv6LL Feb 13 19:24:15.783706 systemd-networkd[1798]: lxc2782312c9add: Gained IPv6LL Feb 13 19:24:18.442677 ntpd[1864]: Listen normally on 7 cilium_host 192.168.0.178:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 7 cilium_host 192.168.0.178:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 8 cilium_net [fe80::28f5:b2ff:feb1:1762%4]:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 9 cilium_host [fe80::7c07:7aff:fe1d:72f1%5]:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 10 cilium_vxlan [fe80::1094:9cff:fee6:178c%6]:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 11 lxc_health [fe80::2499:8dff:fe63:cf8b%8]:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 12 lxc2782312c9add [fe80::281d:17ff:fe6d:bb28%10]:123 Feb 13 19:24:18.444042 ntpd[1864]: 13 Feb 19:24:18 ntpd[1864]: Listen normally on 13 lxc908631a68fa9 [fe80::107a:6aff:fe10:17d%12]:123 Feb 13 19:24:18.442781 ntpd[1864]: Listen normally on 8 cilium_net [fe80::28f5:b2ff:feb1:1762%4]:123 Feb 13 19:24:18.442841 ntpd[1864]: Listen normally on 9 cilium_host [fe80::7c07:7aff:fe1d:72f1%5]:123 Feb 13 19:24:18.442882 ntpd[1864]: Listen normally on 10 cilium_vxlan [fe80::1094:9cff:fee6:178c%6]:123 Feb 13 19:24:18.442931 ntpd[1864]: Listen normally on 11 lxc_health [fe80::2499:8dff:fe63:cf8b%8]:123 Feb 13 19:24:18.442969 ntpd[1864]: Listen normally on 12 lxc2782312c9add [fe80::281d:17ff:fe6d:bb28%10]:123 Feb 13 19:24:18.443009 ntpd[1864]: Listen normally on 13 lxc908631a68fa9 [fe80::107a:6aff:fe10:17d%12]:123 Feb 13 19:24:22.197156 containerd[1889]: 
time="2025-02-13T19:24:22.194724190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:24:22.197156 containerd[1889]: time="2025-02-13T19:24:22.194819635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:24:22.197156 containerd[1889]: time="2025-02-13T19:24:22.194843702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:24:22.200147 containerd[1889]: time="2025-02-13T19:24:22.198751046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:24:22.200147 containerd[1889]: time="2025-02-13T19:24:22.198981982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:24:22.200147 containerd[1889]: time="2025-02-13T19:24:22.199008089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:24:22.200147 containerd[1889]: time="2025-02-13T19:24:22.199210431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:24:22.201930 containerd[1889]: time="2025-02-13T19:24:22.201449810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:24:22.284942 systemd[1]: Started cri-containerd-a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4.scope - libcontainer container a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4. Feb 13 19:24:22.312373 systemd[1]: Started cri-containerd-2e2ade47217aaf05bf5e6dd1dbc5144e748dad4dc14173ad3d3162810b1b284b.scope - libcontainer container 2e2ade47217aaf05bf5e6dd1dbc5144e748dad4dc14173ad3d3162810b1b284b. 
Feb 13 19:24:22.392419 containerd[1889]: time="2025-02-13T19:24:22.392337639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5w5s,Uid:2a1aac28-7159-4ba4-85cf-f302c65ee3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4\"" Feb 13 19:24:22.399419 containerd[1889]: time="2025-02-13T19:24:22.399317208Z" level=info msg="CreateContainer within sandbox \"a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:24:22.441028 containerd[1889]: time="2025-02-13T19:24:22.440924652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r4q6x,Uid:393d9b64-0531-42a9-ba41-5767b1b48de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2ade47217aaf05bf5e6dd1dbc5144e748dad4dc14173ad3d3162810b1b284b\"" Feb 13 19:24:22.463245 containerd[1889]: time="2025-02-13T19:24:22.462775371Z" level=info msg="CreateContainer within sandbox \"2e2ade47217aaf05bf5e6dd1dbc5144e748dad4dc14173ad3d3162810b1b284b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:24:22.605340 containerd[1889]: time="2025-02-13T19:24:22.605285511Z" level=info msg="CreateContainer within sandbox \"a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3\"" Feb 13 19:24:22.606828 containerd[1889]: time="2025-02-13T19:24:22.606299244Z" level=info msg="StartContainer for \"84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3\"" Feb 13 19:24:22.611364 containerd[1889]: time="2025-02-13T19:24:22.609207193Z" level=info msg="CreateContainer within sandbox \"2e2ade47217aaf05bf5e6dd1dbc5144e748dad4dc14173ad3d3162810b1b284b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6904acea35f69eb119c72cca34b2deed3387608f071578834ea52ea47a9bdb2c\"" Feb 13 19:24:22.611510 containerd[1889]: time="2025-02-13T19:24:22.611401935Z" level=info msg="StartContainer for \"6904acea35f69eb119c72cca34b2deed3387608f071578834ea52ea47a9bdb2c\"" Feb 13 19:24:22.749798 systemd[1]: Started cri-containerd-6904acea35f69eb119c72cca34b2deed3387608f071578834ea52ea47a9bdb2c.scope - libcontainer container 6904acea35f69eb119c72cca34b2deed3387608f071578834ea52ea47a9bdb2c. Feb 13 19:24:22.771427 systemd[1]: Started cri-containerd-84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3.scope - libcontainer container 84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3. 
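
These records walk the standard CRI sequence: RunPodSandbox returns a sandbox ID, CreateContainer registers a coredns container inside that sandbox and returns a container ID, and StartContainer runs it. A minimal Python sketch of that state machine, using the IDs from the log; this is illustrative only, not containerd's implementation:

from dataclasses import dataclass, field

@dataclass
class Sandbox:
    # Models the CRI flow seen above: sandbox -> created container -> running.
    id: str
    containers: dict = field(default_factory=dict)

    def create_container(self, cid: str) -> None:
        self.containers[cid] = "CONTAINER_CREATED"

    def start_container(self, cid: str) -> None:
        assert cid in self.containers
        self.containers[cid] = "CONTAINER_RUNNING"

sb = Sandbox("a1ea6bcb066e2043fc28d3e1d3e5ffd6f79b77e67b4d51726cd8e494c3d51ca4")
cid = "84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3"
sb.create_container(cid)   # "CreateContainer within sandbox ... returns container id"
sb.start_container(cid)    # "StartContainer for ... returns successfully"
print(sb.containers[cid])
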
Feb 13 19:24:22.865540 containerd[1889]: time="2025-02-13T19:24:22.859945888Z" level=info msg="StartContainer for \"6904acea35f69eb119c72cca34b2deed3387608f071578834ea52ea47a9bdb2c\" returns successfully" Feb 13 19:24:22.890937 containerd[1889]: time="2025-02-13T19:24:22.890350442Z" level=info msg="StartContainer for \"84d0df74ac7a6eee44f19b6f35c0498bad1de07855e56ddcc02b4733c0ec62b3\" returns successfully" Feb 13 19:24:23.091027 kubelet[3345]: I0213 19:24:23.090877 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r4q6x" podStartSLOduration=37.090854883 podStartE2EDuration="37.090854883s" podCreationTimestamp="2025-02-13 19:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:24:23.087769098 +0000 UTC m=+41.760650876" watchObservedRunningTime="2025-02-13 19:24:23.090854883 +0000 UTC m=+41.763736616" Feb 13 19:24:23.213386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109900051.mount: Deactivated successfully. Feb 13 19:24:23.659899 systemd[1]: Started sshd@9-172.31.18.187:22-139.178.89.65:44190.service - OpenSSH per-connection server daemon (139.178.89.65:44190). Feb 13 19:24:23.926389 sshd[4713]: Accepted publickey for core from 139.178.89.65 port 44190 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:23.994534 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:24.039166 systemd-logind[1871]: New session 10 of user core. Feb 13 19:24:24.070863 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:24:24.161422 kubelet[3345]: I0213 19:24:24.161075 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t5w5s" podStartSLOduration=38.161053583 podStartE2EDuration="38.161053583s" podCreationTimestamp="2025-02-13 19:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:24:23.123871313 +0000 UTC m=+41.796753048" watchObservedRunningTime="2025-02-13 19:24:24.161053583 +0000 UTC m=+42.833935319" Feb 13 19:24:25.259272 sshd[4715]: Connection closed by 139.178.89.65 port 44190 Feb 13 19:24:25.260617 sshd-session[4713]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:25.272354 systemd[1]: sshd@9-172.31.18.187:22-139.178.89.65:44190.service: Deactivated successfully. Feb 13 19:24:25.278685 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:24:25.279957 systemd-logind[1871]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:24:25.299531 systemd-logind[1871]: Removed session 10. Feb 13 19:24:30.300554 systemd[1]: Started sshd@10-172.31.18.187:22-139.178.89.65:45802.service - OpenSSH per-connection server daemon (139.178.89.65:45802). Feb 13 19:24:30.521178 sshd[4736]: Accepted publickey for core from 139.178.89.65 port 45802 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:30.522226 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:30.535238 systemd-logind[1871]: New session 11 of user core. Feb 13 19:24:30.543546 systemd[1]: Started session-11.scope - Session 11 of User core. 
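
The pod_startup_latency_tracker records (cilium-vmddg earlier, the two coredns pods here) encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). When the pull timestamps are the zero value 0001-01-01, as in the coredns records, the two durations coincide. A Python sketch that approximately reproduces the cilium-vmddg figures; kubelet takes the pull window from the monotonic m=+ readings, so the last few nanoseconds drift:

from datetime import datetime

def parse(ts: str) -> datetime:
    # kubelet prints nanoseconds; %f accepts at most 6 digits, so trim.
    ts = ts.replace(" UTC", "")
    if "." in ts:
        head, rest = ts.split(".", 1)
        frac, tz = rest.split(" ", 1)
        return datetime.strptime(f"{head}.{frac[:6]} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")

created = parse("2025-02-13 19:23:46 +0000 UTC")
first_pull = parse("2025-02-13 19:23:46.927806015 +0000 UTC")
last_pull = parse("2025-02-13 19:23:58.798666728 +0000 UTC")
watched = parse("2025-02-13 19:24:14.836631891 +0000 UTC")

e2e = (watched - created).total_seconds()               # ~28.836632s
slo = e2e - (last_pull - first_pull).total_seconds()    # ~16.965771s
print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
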
Feb 13 19:24:30.805084 sshd[4738]: Connection closed by 139.178.89.65 port 45802 Feb 13 19:24:30.805900 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:30.811758 systemd[1]: sshd@10-172.31.18.187:22-139.178.89.65:45802.service: Deactivated successfully. Feb 13 19:24:30.814494 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:24:30.816644 systemd-logind[1871]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:24:30.817931 systemd-logind[1871]: Removed session 11. Feb 13 19:24:35.854276 systemd[1]: Started sshd@11-172.31.18.187:22-139.178.89.65:46048.service - OpenSSH per-connection server daemon (139.178.89.65:46048). Feb 13 19:24:36.071582 sshd[4750]: Accepted publickey for core from 139.178.89.65 port 46048 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:36.075833 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:36.101216 systemd-logind[1871]: New session 12 of user core. Feb 13 19:24:36.108371 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:24:36.438080 sshd[4752]: Connection closed by 139.178.89.65 port 46048 Feb 13 19:24:36.436852 sshd-session[4750]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:36.452676 systemd[1]: sshd@11-172.31.18.187:22-139.178.89.65:46048.service: Deactivated successfully. Feb 13 19:24:36.456298 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:24:36.459427 systemd-logind[1871]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:24:36.461562 systemd-logind[1871]: Removed session 12. Feb 13 19:24:41.463832 systemd[1]: Started sshd@12-172.31.18.187:22-139.178.89.65:46056.service - OpenSSH per-connection server daemon (139.178.89.65:46056). Feb 13 19:24:41.699640 sshd[4765]: Accepted publickey for core from 139.178.89.65 port 46056 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:41.707767 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:41.735951 systemd-logind[1871]: New session 13 of user core. Feb 13 19:24:41.739856 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:24:42.063722 sshd[4769]: Connection closed by 139.178.89.65 port 46056 Feb 13 19:24:42.068261 sshd-session[4765]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:42.102682 systemd[1]: sshd@12-172.31.18.187:22-139.178.89.65:46056.service: Deactivated successfully. Feb 13 19:24:42.113890 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:24:42.117967 systemd-logind[1871]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:24:42.131326 systemd[1]: Started sshd@13-172.31.18.187:22-139.178.89.65:46062.service - OpenSSH per-connection server daemon (139.178.89.65:46062). Feb 13 19:24:42.135614 systemd-logind[1871]: Removed session 13. Feb 13 19:24:42.413641 sshd[4780]: Accepted publickey for core from 139.178.89.65 port 46062 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:42.417360 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:42.439562 systemd-logind[1871]: New session 14 of user core. Feb 13 19:24:42.454376 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 19:24:42.889967 sshd[4782]: Connection closed by 139.178.89.65 port 46062 Feb 13 19:24:42.891685 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:42.902464 systemd-logind[1871]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:24:42.913327 systemd[1]: sshd@13-172.31.18.187:22-139.178.89.65:46062.service: Deactivated successfully. Feb 13 19:24:42.925066 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:24:42.948312 systemd-logind[1871]: Removed session 14. Feb 13 19:24:42.961024 systemd[1]: Started sshd@14-172.31.18.187:22-139.178.89.65:46064.service - OpenSSH per-connection server daemon (139.178.89.65:46064). Feb 13 19:24:43.159411 sshd[4790]: Accepted publickey for core from 139.178.89.65 port 46064 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:43.160699 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:43.169646 systemd-logind[1871]: New session 15 of user core. Feb 13 19:24:43.175896 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:24:43.469924 sshd[4792]: Connection closed by 139.178.89.65 port 46064 Feb 13 19:24:43.476326 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:43.488717 systemd[1]: sshd@14-172.31.18.187:22-139.178.89.65:46064.service: Deactivated successfully. Feb 13 19:24:43.501949 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:24:43.503195 systemd-logind[1871]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:24:43.510496 systemd-logind[1871]: Removed session 15. Feb 13 19:24:48.524617 systemd[1]: Started sshd@15-172.31.18.187:22-139.178.89.65:47602.service - OpenSSH per-connection server daemon (139.178.89.65:47602). Feb 13 19:24:48.747108 sshd[4804]: Accepted publickey for core from 139.178.89.65 port 47602 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:48.748117 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:48.755923 systemd-logind[1871]: New session 16 of user core. Feb 13 19:24:48.767516 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:24:49.049886 sshd[4808]: Connection closed by 139.178.89.65 port 47602 Feb 13 19:24:49.055200 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:49.067595 systemd[1]: sshd@15-172.31.18.187:22-139.178.89.65:47602.service: Deactivated successfully. Feb 13 19:24:49.081897 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:24:49.089281 systemd-logind[1871]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:24:49.095796 systemd-logind[1871]: Removed session 16. Feb 13 19:24:54.096247 systemd[1]: Started sshd@16-172.31.18.187:22-139.178.89.65:47606.service - OpenSSH per-connection server daemon (139.178.89.65:47606). Feb 13 19:24:54.340979 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 47606 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:54.342978 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:54.353064 systemd-logind[1871]: New session 17 of user core. Feb 13 19:24:54.356836 systemd[1]: Started session-17.scope - Session 17 of User core. 
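
The sshd/systemd-logind records follow a fixed cycle: Accepted publickey, pam session opened, "New session N", session-N.scope started, then the mirror image on close. Pairing "New session N" with "Removed session N" gives per-session lifetimes; a small parsing sketch over two logind messages copied from above (the log carries no year, so one is assumed):

import re
from datetime import datetime

records = [  # (timestamp, systemd-logind message), copied from the log above
    ("Feb 13 19:24:24.039166", "New session 10 of user core."),
    ("Feb 13 19:24:25.299531", "Removed session 10."),
]

def ts(s: str) -> datetime:
    return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

opened: dict[str, datetime] = {}
for when, msg in records:
    if m := re.match(r"New session (\d+)", msg):
        opened[m.group(1)] = ts(when)
    elif m := re.match(r"Removed session (\d+)", msg):
        start = opened.pop(m.group(1))
        print(f"session {m.group(1)}: {(ts(when) - start).total_seconds():.3f}s")
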
Feb 13 19:24:54.613052 sshd[4825]: Connection closed by 139.178.89.65 port 47606 Feb 13 19:24:54.615087 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:54.618807 systemd[1]: sshd@16-172.31.18.187:22-139.178.89.65:47606.service: Deactivated successfully. Feb 13 19:24:54.622181 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:24:54.624816 systemd-logind[1871]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:24:54.626781 systemd-logind[1871]: Removed session 17. Feb 13 19:24:59.659120 systemd[1]: Started sshd@17-172.31.18.187:22-139.178.89.65:36022.service - OpenSSH per-connection server daemon (139.178.89.65:36022). Feb 13 19:24:59.870366 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 36022 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:24:59.872079 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:59.881552 systemd-logind[1871]: New session 18 of user core. Feb 13 19:24:59.889382 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:25:00.236644 sshd[4838]: Connection closed by 139.178.89.65 port 36022 Feb 13 19:25:00.238304 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:00.244171 systemd-logind[1871]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:25:00.245162 systemd[1]: sshd@17-172.31.18.187:22-139.178.89.65:36022.service: Deactivated successfully. Feb 13 19:25:00.247285 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:25:00.248374 systemd-logind[1871]: Removed session 18. Feb 13 19:25:00.283627 systemd[1]: Started sshd@18-172.31.18.187:22-139.178.89.65:36034.service - OpenSSH per-connection server daemon (139.178.89.65:36034). Feb 13 19:25:00.466185 sshd[4849]: Accepted publickey for core from 139.178.89.65 port 36034 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:00.468913 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:00.481661 systemd-logind[1871]: New session 19 of user core. Feb 13 19:25:00.490766 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:25:01.290622 sshd[4851]: Connection closed by 139.178.89.65 port 36034 Feb 13 19:25:01.293619 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:01.306180 systemd-logind[1871]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:25:01.307313 systemd[1]: sshd@18-172.31.18.187:22-139.178.89.65:36034.service: Deactivated successfully. Feb 13 19:25:01.310452 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:25:01.354511 systemd[1]: Started sshd@19-172.31.18.187:22-139.178.89.65:36036.service - OpenSSH per-connection server daemon (139.178.89.65:36036). Feb 13 19:25:01.367948 systemd-logind[1871]: Removed session 19. Feb 13 19:25:01.699521 sshd[4859]: Accepted publickey for core from 139.178.89.65 port 36036 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:01.701406 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:01.714408 systemd-logind[1871]: New session 20 of user core. Feb 13 19:25:01.724420 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 19:25:03.683573 sshd[4861]: Connection closed by 139.178.89.65 port 36036 Feb 13 19:25:03.686384 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:03.694475 systemd-logind[1871]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:25:03.697436 systemd[1]: sshd@19-172.31.18.187:22-139.178.89.65:36036.service: Deactivated successfully. Feb 13 19:25:03.701089 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:25:03.735550 systemd-logind[1871]: Removed session 20. Feb 13 19:25:03.747534 systemd[1]: Started sshd@20-172.31.18.187:22-139.178.89.65:36044.service - OpenSSH per-connection server daemon (139.178.89.65:36044). Feb 13 19:25:04.019253 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 36044 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:04.019702 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:04.052213 systemd-logind[1871]: New session 21 of user core. Feb 13 19:25:04.061421 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:25:04.650845 sshd[4880]: Connection closed by 139.178.89.65 port 36044 Feb 13 19:25:04.653049 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:04.657792 systemd-logind[1871]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:25:04.659540 systemd[1]: sshd@20-172.31.18.187:22-139.178.89.65:36044.service: Deactivated successfully. Feb 13 19:25:04.662402 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:25:04.664357 systemd-logind[1871]: Removed session 21. Feb 13 19:25:04.688596 systemd[1]: Started sshd@21-172.31.18.187:22-139.178.89.65:50218.service - OpenSSH per-connection server daemon (139.178.89.65:50218). Feb 13 19:25:04.859272 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 50218 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:04.862329 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:04.869382 systemd-logind[1871]: New session 22 of user core. Feb 13 19:25:04.874426 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:25:05.118965 sshd[4891]: Connection closed by 139.178.89.65 port 50218 Feb 13 19:25:05.120552 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:05.124396 systemd[1]: sshd@21-172.31.18.187:22-139.178.89.65:50218.service: Deactivated successfully. Feb 13 19:25:05.127318 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:25:05.129358 systemd-logind[1871]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:25:05.131458 systemd-logind[1871]: Removed session 22. Feb 13 19:25:10.155628 systemd[1]: Started sshd@22-172.31.18.187:22-139.178.89.65:50222.service - OpenSSH per-connection server daemon (139.178.89.65:50222). Feb 13 19:25:10.373019 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 50222 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:10.373795 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:10.394114 systemd-logind[1871]: New session 23 of user core. Feb 13 19:25:10.406514 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 19:25:10.691570 sshd[4904]: Connection closed by 139.178.89.65 port 50222 Feb 13 19:25:10.692301 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:10.715290 systemd[1]: sshd@22-172.31.18.187:22-139.178.89.65:50222.service: Deactivated successfully. Feb 13 19:25:10.726517 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:25:10.729204 systemd-logind[1871]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:25:10.734309 systemd-logind[1871]: Removed session 23. Feb 13 19:25:15.731881 systemd[1]: Started sshd@23-172.31.18.187:22-139.178.89.65:47486.service - OpenSSH per-connection server daemon (139.178.89.65:47486). Feb 13 19:25:15.926605 sshd[4917]: Accepted publickey for core from 139.178.89.65 port 47486 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:15.928247 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:15.938253 systemd-logind[1871]: New session 24 of user core. Feb 13 19:25:15.944369 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:25:16.151927 sshd[4919]: Connection closed by 139.178.89.65 port 47486 Feb 13 19:25:16.153838 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:16.158600 systemd-logind[1871]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:25:16.159486 systemd[1]: sshd@23-172.31.18.187:22-139.178.89.65:47486.service: Deactivated successfully. Feb 13 19:25:16.163409 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:25:16.165059 systemd-logind[1871]: Removed session 24. Feb 13 19:25:21.191804 systemd[1]: Started sshd@24-172.31.18.187:22-139.178.89.65:47490.service - OpenSSH per-connection server daemon (139.178.89.65:47490). Feb 13 19:25:21.375001 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 47490 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:21.376946 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:21.383536 systemd-logind[1871]: New session 25 of user core. Feb 13 19:25:21.393468 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:25:21.618547 sshd[4934]: Connection closed by 139.178.89.65 port 47490 Feb 13 19:25:21.621076 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:21.631513 systemd[1]: sshd@24-172.31.18.187:22-139.178.89.65:47490.service: Deactivated successfully. Feb 13 19:25:21.638202 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:25:21.651717 systemd-logind[1871]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:25:21.660371 systemd-logind[1871]: Removed session 25. Feb 13 19:25:26.666855 systemd[1]: Started sshd@25-172.31.18.187:22-139.178.89.65:40258.service - OpenSSH per-connection server daemon (139.178.89.65:40258). Feb 13 19:25:26.901567 sshd[4945]: Accepted publickey for core from 139.178.89.65 port 40258 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:26.904255 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:26.928796 systemd-logind[1871]: New session 26 of user core. Feb 13 19:25:26.937091 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 19:25:27.193080 sshd[4947]: Connection closed by 139.178.89.65 port 40258 Feb 13 19:25:27.194395 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:27.202433 systemd[1]: sshd@25-172.31.18.187:22-139.178.89.65:40258.service: Deactivated successfully. Feb 13 19:25:27.205775 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:25:27.207918 systemd-logind[1871]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:25:27.243033 systemd[1]: Started sshd@26-172.31.18.187:22-139.178.89.65:40268.service - OpenSSH per-connection server daemon (139.178.89.65:40268). Feb 13 19:25:27.246786 systemd-logind[1871]: Removed session 26. Feb 13 19:25:27.450806 sshd[4958]: Accepted publickey for core from 139.178.89.65 port 40268 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:27.451438 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:27.461231 systemd-logind[1871]: New session 27 of user core. Feb 13 19:25:27.469622 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:25:30.031805 systemd[1]: run-containerd-runc-k8s.io-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af-runc.o2j9hW.mount: Deactivated successfully. Feb 13 19:25:30.067466 containerd[1889]: time="2025-02-13T19:25:30.067418381Z" level=info msg="StopContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" with timeout 30 (s)" Feb 13 19:25:30.074471 containerd[1889]: time="2025-02-13T19:25:30.074299030Z" level=info msg="Stop container \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" with signal terminated" Feb 13 19:25:30.082947 containerd[1889]: time="2025-02-13T19:25:30.081472762Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:25:30.109220 containerd[1889]: time="2025-02-13T19:25:30.109072819Z" level=info msg="StopContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" with timeout 2 (s)" Feb 13 19:25:30.113867 containerd[1889]: time="2025-02-13T19:25:30.113807026Z" level=info msg="Stop container \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" with signal terminated" Feb 13 19:25:30.114464 systemd[1]: cri-containerd-ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d.scope: Deactivated successfully. Feb 13 19:25:30.131637 systemd-networkd[1798]: lxc_health: Link DOWN Feb 13 19:25:30.131650 systemd-networkd[1798]: lxc_health: Lost carrier Feb 13 19:25:30.165863 systemd[1]: cri-containerd-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af.scope: Deactivated successfully. Feb 13 19:25:30.166603 systemd[1]: cri-containerd-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af.scope: Consumed 9.099s CPU time. Feb 13 19:25:30.192605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d-rootfs.mount: Deactivated successfully. 
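
The StopContainer records above ("with timeout 30 (s)", then "with signal terminated") reflect the usual graceful-stop pattern: deliver SIGTERM, wait up to the timeout, escalate to SIGKILL. A generic Python sketch of that pattern, illustrative rather than containerd's actual code path:

import subprocess

def stop(proc: subprocess.Popen, timeout: float = 30.0) -> None:
    proc.terminate()                  # SIGTERM, the "signal terminated" step
    try:
        proc.wait(timeout=timeout)    # grace period, 30s in the log above
    except subprocess.TimeoutExpired:
        proc.kill()                   # SIGKILL once the grace period expires
        proc.wait()

stop(subprocess.Popen(["sleep", "300"]), timeout=1.0)
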
Feb 13 19:25:30.219018 containerd[1889]: time="2025-02-13T19:25:30.218813245Z" level=info msg="shim disconnected" id=ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d namespace=k8s.io Feb 13 19:25:30.219018 containerd[1889]: time="2025-02-13T19:25:30.219019988Z" level=warning msg="cleaning up after shim disconnected" id=ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d namespace=k8s.io Feb 13 19:25:30.219529 containerd[1889]: time="2025-02-13T19:25:30.219034065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:25:30.262139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af-rootfs.mount: Deactivated successfully. Feb 13 19:25:30.275159 containerd[1889]: time="2025-02-13T19:25:30.274991373Z" level=info msg="shim disconnected" id=faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af namespace=k8s.io Feb 13 19:25:30.275159 containerd[1889]: time="2025-02-13T19:25:30.275147488Z" level=warning msg="cleaning up after shim disconnected" id=faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af namespace=k8s.io Feb 13 19:25:30.275159 containerd[1889]: time="2025-02-13T19:25:30.275164090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:25:30.281557 containerd[1889]: time="2025-02-13T19:25:30.280901293Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:25:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:25:30.288405 containerd[1889]: time="2025-02-13T19:25:30.288292458Z" level=info msg="StopContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" returns successfully" Feb 13 19:25:30.304515 containerd[1889]: time="2025-02-13T19:25:30.303877255Z" level=info msg="StopPodSandbox for \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\"" Feb 13 19:25:30.307724 containerd[1889]: time="2025-02-13T19:25:30.304328507Z" level=info msg="Container to stop \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.312006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97-shm.mount: Deactivated successfully. 
Feb 13 19:25:30.312599 containerd[1889]: time="2025-02-13T19:25:30.312559005Z" level=info msg="StopContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" returns successfully" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313665354Z" level=info msg="StopPodSandbox for \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\"" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313710788Z" level=info msg="Container to stop \"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313750365Z" level=info msg="Container to stop \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313761829Z" level=info msg="Container to stop \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313774560Z" level=info msg="Container to stop \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.314035 containerd[1889]: time="2025-02-13T19:25:30.313786584Z" level=info msg="Container to stop \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:25:30.353676 systemd[1]: cri-containerd-0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880.scope: Deactivated successfully. Feb 13 19:25:30.366874 systemd[1]: cri-containerd-c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97.scope: Deactivated successfully. 
Feb 13 19:25:30.473575 containerd[1889]: time="2025-02-13T19:25:30.473491265Z" level=info msg="shim disconnected" id=0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880 namespace=k8s.io Feb 13 19:25:30.473864 containerd[1889]: time="2025-02-13T19:25:30.473614591Z" level=warning msg="cleaning up after shim disconnected" id=0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880 namespace=k8s.io Feb 13 19:25:30.473864 containerd[1889]: time="2025-02-13T19:25:30.473629480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:25:30.474238 containerd[1889]: time="2025-02-13T19:25:30.474191955Z" level=info msg="shim disconnected" id=c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97 namespace=k8s.io Feb 13 19:25:30.478974 containerd[1889]: time="2025-02-13T19:25:30.474811004Z" level=warning msg="cleaning up after shim disconnected" id=c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97 namespace=k8s.io Feb 13 19:25:30.478974 containerd[1889]: time="2025-02-13T19:25:30.474833623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:25:30.543126 containerd[1889]: time="2025-02-13T19:25:30.542917562Z" level=info msg="TearDown network for sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" successfully" Feb 13 19:25:30.543126 containerd[1889]: time="2025-02-13T19:25:30.542958213Z" level=info msg="StopPodSandbox for \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" returns successfully" Feb 13 19:25:30.546486 containerd[1889]: time="2025-02-13T19:25:30.546260436Z" level=info msg="TearDown network for sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" successfully" Feb 13 19:25:30.546486 containerd[1889]: time="2025-02-13T19:25:30.546294317Z" level=info msg="StopPodSandbox for \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" returns successfully" Feb 13 19:25:30.627724 kubelet[3345]: I0213 19:25:30.627589 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-xtables-lock\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.627724 kubelet[3345]: I0213 19:25:30.627648 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ftml\" (UniqueName: \"kubernetes.io/projected/ffb44e99-4525-4809-996a-6200aa62fac8-kube-api-access-9ftml\") pod \"ffb44e99-4525-4809-996a-6200aa62fac8\" (UID: \"ffb44e99-4525-4809-996a-6200aa62fac8\") " Feb 13 19:25:30.627724 kubelet[3345]: I0213 19:25:30.627671 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-run\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.627724 kubelet[3345]: I0213 19:25:30.627700 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c492be81-a1df-441c-9ddc-36fb9c692d0d-clustermesh-secrets\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.627724 kubelet[3345]: I0213 19:25:30.627724 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2st\" (UniqueName: 
\"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-kube-api-access-rn2st\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627744 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-net\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627764 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-bpf-maps\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627786 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cni-path\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627810 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-hubble-tls\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627840 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-config-path\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628429 kubelet[3345]: I0213 19:25:30.627863 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-hostproc\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628719 kubelet[3345]: I0213 19:25:30.627890 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-etc-cni-netd\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628719 kubelet[3345]: I0213 19:25:30.627911 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-cgroup\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628719 kubelet[3345]: I0213 19:25:30.627932 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-kernel\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.628719 kubelet[3345]: I0213 19:25:30.628035 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ffb44e99-4525-4809-996a-6200aa62fac8-cilium-config-path\") pod \"ffb44e99-4525-4809-996a-6200aa62fac8\" (UID: \"ffb44e99-4525-4809-996a-6200aa62fac8\") " Feb 13 19:25:30.628719 kubelet[3345]: I0213 19:25:30.628069 3345 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-lib-modules\") pod \"c492be81-a1df-441c-9ddc-36fb9c692d0d\" (UID: \"c492be81-a1df-441c-9ddc-36fb9c692d0d\") " Feb 13 19:25:30.642174 kubelet[3345]: I0213 19:25:30.638620 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.642174 kubelet[3345]: I0213 19:25:30.640944 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cni-path" (OuterVolumeSpecName: "cni-path") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.644943 kubelet[3345]: I0213 19:25:30.644092 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:25:30.644943 kubelet[3345]: I0213 19:25:30.644198 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-hostproc" (OuterVolumeSpecName: "hostproc") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.644943 kubelet[3345]: I0213 19:25:30.644226 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.644943 kubelet[3345]: I0213 19:25:30.644249 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.644943 kubelet[3345]: I0213 19:25:30.644271 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.647055 kubelet[3345]: I0213 19:25:30.647004 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb44e99-4525-4809-996a-6200aa62fac8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffb44e99-4525-4809-996a-6200aa62fac8" (UID: "ffb44e99-4525-4809-996a-6200aa62fac8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:25:30.647055 kubelet[3345]: I0213 19:25:30.638624 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.653899 kubelet[3345]: I0213 19:25:30.653687 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.653899 kubelet[3345]: I0213 19:25:30.653749 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.655538 kubelet[3345]: I0213 19:25:30.655117 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-kube-api-access-rn2st" (OuterVolumeSpecName: "kube-api-access-rn2st") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "kube-api-access-rn2st". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:25:30.655538 kubelet[3345]: I0213 19:25:30.655219 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:25:30.657702 kubelet[3345]: I0213 19:25:30.657553 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:25:30.657702 kubelet[3345]: I0213 19:25:30.657657 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb44e99-4525-4809-996a-6200aa62fac8-kube-api-access-9ftml" (OuterVolumeSpecName: "kube-api-access-9ftml") pod "ffb44e99-4525-4809-996a-6200aa62fac8" (UID: "ffb44e99-4525-4809-996a-6200aa62fac8"). InnerVolumeSpecName "kube-api-access-9ftml". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:25:30.659057 kubelet[3345]: I0213 19:25:30.659020 3345 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c492be81-a1df-441c-9ddc-36fb9c692d0d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c492be81-a1df-441c-9ddc-36fb9c692d0d" (UID: "c492be81-a1df-441c-9ddc-36fb9c692d0d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:25:30.729637 kubelet[3345]: I0213 19:25:30.729478 3345 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-config-path\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729637 kubelet[3345]: I0213 19:25:30.729627 3345 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-hostproc\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729637 kubelet[3345]: I0213 19:25:30.729643 3345 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-kernel\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729654 3345 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb44e99-4525-4809-996a-6200aa62fac8-cilium-config-path\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729666 3345 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-etc-cni-netd\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729677 3345 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-cgroup\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729690 3345 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-lib-modules\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729700 3345 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-xtables-lock\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729711 3345 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ftml\" (UniqueName: \"kubernetes.io/projected/ffb44e99-4525-4809-996a-6200aa62fac8-kube-api-access-9ftml\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729723 3345 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cilium-run\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.729878 kubelet[3345]: I0213 19:25:30.729736 3345 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c492be81-a1df-441c-9ddc-36fb9c692d0d-clustermesh-secrets\") on node 
\"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.730094 kubelet[3345]: I0213 19:25:30.729749 3345 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-kube-api-access-rn2st\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.730094 kubelet[3345]: I0213 19:25:30.729760 3345 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-host-proc-sys-net\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.730094 kubelet[3345]: I0213 19:25:30.729772 3345 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-bpf-maps\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.730094 kubelet[3345]: I0213 19:25:30.729783 3345 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c492be81-a1df-441c-9ddc-36fb9c692d0d-cni-path\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:30.730094 kubelet[3345]: I0213 19:25:30.729793 3345 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c492be81-a1df-441c-9ddc-36fb9c692d0d-hubble-tls\") on node \"ip-172-31-18-187\" DevicePath \"\"" Feb 13 19:25:31.022333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97-rootfs.mount: Deactivated successfully. Feb 13 19:25:31.022473 systemd[1]: var-lib-kubelet-pods-ffb44e99\x2d4525\x2d4809\x2d996a\x2d6200aa62fac8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ftml.mount: Deactivated successfully. Feb 13 19:25:31.022570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880-rootfs.mount: Deactivated successfully. Feb 13 19:25:31.022650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880-shm.mount: Deactivated successfully. Feb 13 19:25:31.022734 systemd[1]: var-lib-kubelet-pods-c492be81\x2da1df\x2d441c\x2d9ddc\x2d36fb9c692d0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drn2st.mount: Deactivated successfully. Feb 13 19:25:31.022850 systemd[1]: var-lib-kubelet-pods-c492be81\x2da1df\x2d441c\x2d9ddc\x2d36fb9c692d0d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:25:31.022942 systemd[1]: var-lib-kubelet-pods-c492be81\x2da1df\x2d441c\x2d9ddc\x2d36fb9c692d0d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:25:31.437315 systemd[1]: Removed slice kubepods-besteffort-podffb44e99_4525_4809_996a_6200aa62fac8.slice - libcontainer container kubepods-besteffort-podffb44e99_4525_4809_996a_6200aa62fac8.slice. Feb 13 19:25:31.478260 kubelet[3345]: I0213 19:25:31.475290 3345 scope.go:117] "RemoveContainer" containerID="ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d" Feb 13 19:25:31.485707 systemd[1]: Removed slice kubepods-burstable-podc492be81_a1df_441c_9ddc_36fb9c692d0d.slice - libcontainer container kubepods-burstable-podc492be81_a1df_441c_9ddc_36fb9c692d0d.slice. Feb 13 19:25:31.485868 systemd[1]: kubepods-burstable-podc492be81_a1df_441c_9ddc_36fb9c692d0d.slice: Consumed 9.208s CPU time. 
Feb 13 19:25:31.552081 containerd[1889]: time="2025-02-13T19:25:31.548478809Z" level=info msg="RemoveContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\"" Feb 13 19:25:31.567784 containerd[1889]: time="2025-02-13T19:25:31.567598992Z" level=info msg="RemoveContainer for \"ca1323d02ab6f48280503a1e6455cfbdf8f6d9da9f952a3e9575b1c88df5b36d\" returns successfully" Feb 13 19:25:31.572012 kubelet[3345]: I0213 19:25:31.571981 3345 scope.go:117] "RemoveContainer" containerID="faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af" Feb 13 19:25:31.574226 containerd[1889]: time="2025-02-13T19:25:31.574059169Z" level=info msg="RemoveContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\"" Feb 13 19:25:31.588427 containerd[1889]: time="2025-02-13T19:25:31.588195392Z" level=info msg="RemoveContainer for \"faa38fa5cd97bf87e9e351dec59e379ddece6265795ecc7b4452a3c52251d6af\" returns successfully" Feb 13 19:25:31.588793 kubelet[3345]: I0213 19:25:31.588763 3345 scope.go:117] "RemoveContainer" containerID="4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608" Feb 13 19:25:31.594202 containerd[1889]: time="2025-02-13T19:25:31.594154296Z" level=info msg="RemoveContainer for \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\"" Feb 13 19:25:31.608451 containerd[1889]: time="2025-02-13T19:25:31.608398888Z" level=info msg="RemoveContainer for \"4a2fa56ebe0b3d88fc00398adfbadf63acb3bf6d8e4759adf5824a4e57e41608\" returns successfully" Feb 13 19:25:31.608717 kubelet[3345]: I0213 19:25:31.608690 3345 scope.go:117] "RemoveContainer" containerID="cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6" Feb 13 19:25:31.610526 containerd[1889]: time="2025-02-13T19:25:31.610370900Z" level=info msg="RemoveContainer for \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\"" Feb 13 19:25:31.619610 containerd[1889]: time="2025-02-13T19:25:31.619565986Z" level=info msg="RemoveContainer for \"cf213306d9c3926f0dbde8f9bde8b59adf6bb52bda088e7bfb80f79fe19223b6\" returns successfully" Feb 13 19:25:31.619927 kubelet[3345]: I0213 19:25:31.619895 3345 scope.go:117] "RemoveContainer" containerID="c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c" Feb 13 19:25:31.621187 containerd[1889]: time="2025-02-13T19:25:31.621120457Z" level=info msg="RemoveContainer for \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\"" Feb 13 19:25:31.626141 containerd[1889]: time="2025-02-13T19:25:31.626096939Z" level=info msg="RemoveContainer for \"c0589803726607e2fa037b8b6b1a67d374c63fbe893cca5f03097e541955a40c\" returns successfully" Feb 13 19:25:31.626424 kubelet[3345]: I0213 19:25:31.626394 3345 scope.go:117] "RemoveContainer" containerID="963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9" Feb 13 19:25:31.627680 containerd[1889]: time="2025-02-13T19:25:31.627646606Z" level=info msg="RemoveContainer for \"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\"" Feb 13 19:25:31.636961 containerd[1889]: time="2025-02-13T19:25:31.636903969Z" level=info msg="RemoveContainer for \"963df4b808e2653fd669c373fc913c1c1a5f2e687f034049bdbfbc1529c16fc9\" returns successfully" Feb 13 19:25:31.668791 kubelet[3345]: I0213 19:25:31.648451 3345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c492be81-a1df-441c-9ddc-36fb9c692d0d" path="/var/lib/kubelet/pods/c492be81-a1df-441c-9ddc-36fb9c692d0d/volumes" Feb 13 19:25:31.670542 kubelet[3345]: I0213 19:25:31.670469 3345 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb44e99-4525-4809-996a-6200aa62fac8" path="/var/lib/kubelet/pods/ffb44e99-4525-4809-996a-6200aa62fac8/volumes" Feb 13 19:25:31.855421 kubelet[3345]: E0213 19:25:31.855298 3345 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:25:31.922315 sshd[4960]: Connection closed by 139.178.89.65 port 40268 Feb 13 19:25:31.923375 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:31.928170 systemd-logind[1871]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:25:31.929592 systemd[1]: sshd@26-172.31.18.187:22-139.178.89.65:40268.service: Deactivated successfully. Feb 13 19:25:31.933070 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:25:31.934422 systemd-logind[1871]: Removed session 27. Feb 13 19:25:31.964579 systemd[1]: Started sshd@27-172.31.18.187:22-139.178.89.65:40276.service - OpenSSH per-connection server daemon (139.178.89.65:40276). Feb 13 19:25:32.175601 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 40276 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ Feb 13 19:25:32.177499 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:25:32.184301 systemd-logind[1871]: New session 28 of user core. Feb 13 19:25:32.190352 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:25:32.442616 ntpd[1864]: Deleting interface #11 lxc_health, fe80::2499:8dff:fe63:cf8b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Feb 13 19:25:33.183459 sshd[5121]: Connection closed by 139.178.89.65 port 40276 Feb 13 19:25:33.185467 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Feb 13 19:25:33.193583 systemd[1]: sshd@27-172.31.18.187:22-139.178.89.65:40276.service: Deactivated successfully. Feb 13 19:25:33.202943 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:25:33.205569 systemd-logind[1871]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:25:33.232255 systemd[1]: Started sshd@28-172.31.18.187:22-139.178.89.65:40280.service - OpenSSH per-connection server daemon (139.178.89.65:40280). Feb 13 19:25:33.234087 systemd-logind[1871]: Removed session 28. Feb 13 19:25:33.287953 kubelet[3345]: I0213 19:25:33.287910 3345 memory_manager.go:355] "RemoveStaleState removing state" podUID="ffb44e99-4525-4809-996a-6200aa62fac8" containerName="cilium-operator" Feb 13 19:25:33.287953 kubelet[3345]: I0213 19:25:33.287958 3345 memory_manager.go:355] "RemoveStaleState removing state" podUID="c492be81-a1df-441c-9ddc-36fb9c692d0d" containerName="cilium-agent" Feb 13 19:25:33.322306 systemd[1]: Created slice kubepods-burstable-podfcb5c795_4a8b_47f1_b06d_6471b533398a.slice - libcontainer container kubepods-burstable-podfcb5c795_4a8b_47f1_b06d_6471b533398a.slice.
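
The slice name created above is derivable from the pod's QoS class and UID: kubelet's systemd cgroup driver names pod slices kubepods-<qos>-pod<uid>.slice and replaces the dashes in the UID with underscores, since '-' denotes slice hierarchy. A one-line sketch, with the UID taken from the volume records that follow:

def pod_slice(uid: str, qos: str = "burstable") -> str:
    # '-' is the slice hierarchy separator, so the pod UID's dashes become '_'.
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("fcb5c795-4a8b-47f1-b06d-6471b533398a"))
# -> kubepods-burstable-podfcb5c795_4a8b_47f1_b06d_6471b533398a.slice
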
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370064 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-cilium-run\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370120 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-hostproc\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370164 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/fcb5c795-4a8b-47f1-b06d-6471b533398a-kube-api-access-8r78m\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370260 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-etc-cni-netd\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370287 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-lib-modules\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374166 kubelet[3345]: I0213 19:25:33.370348 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcb5c795-4a8b-47f1-b06d-6471b533398a-cilium-config-path\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.370376 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fcb5c795-4a8b-47f1-b06d-6471b533398a-cilium-ipsec-secrets\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.370430 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-host-proc-sys-net\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.370460 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fcb5c795-4a8b-47f1-b06d-6471b533398a-hubble-tls\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.371141 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-bpf-maps\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.371177 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-cilium-cgroup\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374534 kubelet[3345]: I0213 19:25:33.371217 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fcb5c795-4a8b-47f1-b06d-6471b533398a-clustermesh-secrets\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374782 kubelet[3345]: I0213 19:25:33.371314 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-cni-path\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374782 kubelet[3345]: I0213 19:25:33.371350 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-xtables-lock\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.374782 kubelet[3345]: I0213 19:25:33.371403 3345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fcb5c795-4a8b-47f1-b06d-6471b533398a-host-proc-sys-kernel\") pod \"cilium-87kt5\" (UID: \"fcb5c795-4a8b-47f1-b06d-6471b533398a\") " pod="kube-system/cilium-87kt5"
Feb 13 19:25:33.445891 sshd[5130]: Accepted publickey for core from 139.178.89.65 port 40280 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ
Feb 13 19:25:33.447921 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:25:33.453124 systemd-logind[1871]: New session 29 of user core.
Feb 13 19:25:33.456378 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:25:33.573908 sshd[5133]: Connection closed by 139.178.89.65 port 40280
Feb 13 19:25:33.575418 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Feb 13 19:25:33.581258 systemd[1]: sshd@28-172.31.18.187:22-139.178.89.65:40280.service: Deactivated successfully.
Feb 13 19:25:33.588697 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:25:33.590297 systemd-logind[1871]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:25:33.591960 systemd-logind[1871]: Removed session 29.
Feb 13 19:25:33.614514 systemd[1]: Started sshd@29-172.31.18.187:22-139.178.89.65:40290.service - OpenSSH per-connection server daemon (139.178.89.65:40290).
Feb 13 19:25:33.673832 containerd[1889]: time="2025-02-13T19:25:33.673787428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87kt5,Uid:fcb5c795-4a8b-47f1-b06d-6471b533398a,Namespace:kube-system,Attempt:0,}"
Feb 13 19:25:33.716183 containerd[1889]: time="2025-02-13T19:25:33.715790732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:25:33.716183 containerd[1889]: time="2025-02-13T19:25:33.715855674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:25:33.716183 containerd[1889]: time="2025-02-13T19:25:33.715877546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:25:33.716614 containerd[1889]: time="2025-02-13T19:25:33.716559826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:25:33.750662 systemd[1]: Started cri-containerd-e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24.scope - libcontainer container e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24.
Feb 13 19:25:33.787364 containerd[1889]: time="2025-02-13T19:25:33.787321602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87kt5,Uid:fcb5c795-4a8b-47f1-b06d-6471b533398a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\""
Feb 13 19:25:33.792594 containerd[1889]: time="2025-02-13T19:25:33.792544309Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:25:33.796489 sshd[5143]: Accepted publickey for core from 139.178.89.65 port 40290 ssh2: RSA SHA256:KGbcKF8vZ4+NPkSlme0qB32HGnqAN+vlwaFvbJSvXYQ
Feb 13 19:25:33.800918 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:25:33.811453 systemd-logind[1871]: New session 30 of user core.
Feb 13 19:25:33.818354 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:25:33.828120 containerd[1889]: time="2025-02-13T19:25:33.827715026Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117\""
Feb 13 19:25:33.831772 containerd[1889]: time="2025-02-13T19:25:33.831571421Z" level=info msg="StartContainer for \"84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117\""
Feb 13 19:25:33.878788 systemd[1]: Started cri-containerd-84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117.scope - libcontainer container 84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117.
Feb 13 19:25:33.942573 containerd[1889]: time="2025-02-13T19:25:33.942526602Z" level=info msg="StartContainer for \"84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117\" returns successfully"
Feb 13 19:25:34.021758 systemd[1]: cri-containerd-84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117.scope: Deactivated successfully.
Feb 13 19:25:34.110684 containerd[1889]: time="2025-02-13T19:25:34.109993675Z" level=info msg="shim disconnected" id=84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117 namespace=k8s.io
Feb 13 19:25:34.110684 containerd[1889]: time="2025-02-13T19:25:34.110330544Z" level=warning msg="cleaning up after shim disconnected" id=84e1667b55bbb1b6b2b373912f853d323181f9569774b990ccbb649d9841a117 namespace=k8s.io
Feb 13 19:25:34.110684 containerd[1889]: time="2025-02-13T19:25:34.110346149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:25:34.547795 containerd[1889]: time="2025-02-13T19:25:34.547746105Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:25:34.586101 kubelet[3345]: I0213 19:25:34.585907 3345 setters.go:602] "Node became not ready" node="ip-172-31-18-187" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:25:34Z","lastTransitionTime":"2025-02-13T19:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:25:34.592196 containerd[1889]: time="2025-02-13T19:25:34.590253724Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84\""
Feb 13 19:25:34.592747 containerd[1889]: time="2025-02-13T19:25:34.592714318Z" level=info msg="StartContainer for \"c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84\""
Feb 13 19:25:34.688443 systemd[1]: Started cri-containerd-c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84.scope - libcontainer container c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84.
Feb 13 19:25:34.819916 containerd[1889]: time="2025-02-13T19:25:34.819627660Z" level=info msg="StartContainer for \"c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84\" returns successfully"
Feb 13 19:25:34.839379 systemd[1]: cri-containerd-c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84.scope: Deactivated successfully.
Feb 13 19:25:34.879322 containerd[1889]: time="2025-02-13T19:25:34.879256427Z" level=info msg="shim disconnected" id=c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84 namespace=k8s.io
Feb 13 19:25:34.879678 containerd[1889]: time="2025-02-13T19:25:34.879339421Z" level=warning msg="cleaning up after shim disconnected" id=c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84 namespace=k8s.io
Feb 13 19:25:34.879678 containerd[1889]: time="2025-02-13T19:25:34.879436741Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:25:35.500247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2d4848975400c750ca131f2c3eddd0e94963287bdf9bde858db4fc7f264dd84-rootfs.mount: Deactivated successfully.
Feb 13 19:25:35.550311 containerd[1889]: time="2025-02-13T19:25:35.549924217Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:25:35.591962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652982876.mount: Deactivated successfully.
Feb 13 19:25:35.592988 containerd[1889]: time="2025-02-13T19:25:35.590448975Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9\""
Feb 13 19:25:35.598294 containerd[1889]: time="2025-02-13T19:25:35.597426158Z" level=info msg="StartContainer for \"befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9\""
Feb 13 19:25:35.654358 systemd[1]: Started cri-containerd-befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9.scope - libcontainer container befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9.
Feb 13 19:25:35.695687 containerd[1889]: time="2025-02-13T19:25:35.695646658Z" level=info msg="StartContainer for \"befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9\" returns successfully"
Feb 13 19:25:35.752349 systemd[1]: cri-containerd-befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9.scope: Deactivated successfully.
Feb 13 19:25:35.832427 containerd[1889]: time="2025-02-13T19:25:35.832338344Z" level=info msg="shim disconnected" id=befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9 namespace=k8s.io
Feb 13 19:25:35.832427 containerd[1889]: time="2025-02-13T19:25:35.832403532Z" level=warning msg="cleaning up after shim disconnected" id=befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9 namespace=k8s.io
Feb 13 19:25:35.832427 containerd[1889]: time="2025-02-13T19:25:35.832415943Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:25:36.503383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-befaaabaa27aa43555e9b25f06afe8b6625abd79b82796616189a385e2265de9-rootfs.mount: Deactivated successfully.
Feb 13 19:25:36.566060 containerd[1889]: time="2025-02-13T19:25:36.565934360Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:25:36.610365 containerd[1889]: time="2025-02-13T19:25:36.609970909Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b\""
Feb 13 19:25:36.614182 containerd[1889]: time="2025-02-13T19:25:36.613400503Z" level=info msg="StartContainer for \"6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b\""
Feb 13 19:25:36.678479 systemd[1]: Started cri-containerd-6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b.scope - libcontainer container 6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b.
Feb 13 19:25:36.723408 systemd[1]: cri-containerd-6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b.scope: Deactivated successfully.
Feb 13 19:25:36.726522 containerd[1889]: time="2025-02-13T19:25:36.726109420Z" level=info msg="StartContainer for \"6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b\" returns successfully"
Feb 13 19:25:36.765436 containerd[1889]: time="2025-02-13T19:25:36.765275203Z" level=info msg="shim disconnected" id=6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b namespace=k8s.io
Feb 13 19:25:36.765436 containerd[1889]: time="2025-02-13T19:25:36.765338832Z" level=warning msg="cleaning up after shim disconnected" id=6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b namespace=k8s.io
Feb 13 19:25:36.765436 containerd[1889]: time="2025-02-13T19:25:36.765352256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:25:36.866343 kubelet[3345]: E0213 19:25:36.865409 3345 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:25:37.500753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6593465effe946d238950fc3108fd4d304aafaf18d0210676026002a64500f0b-rootfs.mount: Deactivated successfully.
Feb 13 19:25:37.582747 containerd[1889]: time="2025-02-13T19:25:37.582694823Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:25:37.627628 containerd[1889]: time="2025-02-13T19:25:37.627575335Z" level=info msg="CreateContainer within sandbox \"e171ed3eb034cf71bf893a225c8c6a088d9d3b710dfed7745a03d2d7bf913d24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3\""
Feb 13 19:25:37.628258 containerd[1889]: time="2025-02-13T19:25:37.628229848Z" level=info msg="StartContainer for \"6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3\""
Feb 13 19:25:37.680389 systemd[1]: Started cri-containerd-6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3.scope - libcontainer container 6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3.
Feb 13 19:25:37.724328 containerd[1889]: time="2025-02-13T19:25:37.724185197Z" level=info msg="StartContainer for \"6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3\" returns successfully"
Feb 13 19:25:38.502699 systemd[1]: run-containerd-runc-k8s.io-6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3-runc.iQzTJm.mount: Deactivated successfully.
Feb 13 19:25:38.585170 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:25:41.638182 kubelet[3345]: E0213 19:25:41.637257 3345 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-r4q6x" podUID="393d9b64-0531-42a9-ba41-5767b1b48de1"
Feb 13 19:25:41.662508 containerd[1889]: time="2025-02-13T19:25:41.658915237Z" level=info msg="StopPodSandbox for \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\""
Feb 13 19:25:41.662508 containerd[1889]: time="2025-02-13T19:25:41.659186556Z" level=info msg="TearDown network for sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" successfully"
Feb 13 19:25:41.662508 containerd[1889]: time="2025-02-13T19:25:41.659228285Z" level=info msg="StopPodSandbox for \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" returns successfully"
Feb 13 19:25:41.662508 containerd[1889]: time="2025-02-13T19:25:41.659905791Z" level=info msg="RemovePodSandbox for \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\""
Feb 13 19:25:41.671404 containerd[1889]: time="2025-02-13T19:25:41.668549640Z" level=info msg="Forcibly stopping sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\""
Feb 13 19:25:41.671404 containerd[1889]: time="2025-02-13T19:25:41.670991906Z" level=info msg="TearDown network for sandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" successfully"
Feb 13 19:25:41.683866 containerd[1889]: time="2025-02-13T19:25:41.683463226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:25:41.683866 containerd[1889]: time="2025-02-13T19:25:41.683609407Z" level=info msg="RemovePodSandbox \"c1879380053ede6e850a9931ba220cf5bd415497cee6c4265a288aaebfadda97\" returns successfully"
Feb 13 19:25:41.685776 containerd[1889]: time="2025-02-13T19:25:41.685744121Z" level=info msg="StopPodSandbox for \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\""
Feb 13 19:25:41.685910 containerd[1889]: time="2025-02-13T19:25:41.685845047Z" level=info msg="TearDown network for sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" successfully"
Feb 13 19:25:41.685910 containerd[1889]: time="2025-02-13T19:25:41.685860578Z" level=info msg="StopPodSandbox for \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" returns successfully"
Feb 13 19:25:41.686680 containerd[1889]: time="2025-02-13T19:25:41.686354751Z" level=info msg="RemovePodSandbox for \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\""
Feb 13 19:25:41.686680 containerd[1889]: time="2025-02-13T19:25:41.686377019Z" level=info msg="Forcibly stopping sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\""
Feb 13 19:25:41.686680 containerd[1889]: time="2025-02-13T19:25:41.686436230Z" level=info msg="TearDown network for sandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" successfully"
Feb 13 19:25:41.694412 containerd[1889]: time="2025-02-13T19:25:41.694354499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:25:41.694744 containerd[1889]: time="2025-02-13T19:25:41.694430202Z" level=info msg="RemovePodSandbox \"0c0055718e29cafb7e7afdf166a6c847aa0b41948170811318ed9d77a50e6880\" returns successfully"
Feb 13 19:25:42.213661 systemd-networkd[1798]: lxc_health: Link UP
Feb 13 19:25:42.220883 systemd-networkd[1798]: lxc_health: Gained carrier
Feb 13 19:25:42.227558 (udev-worker)[5993]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:25:43.655328 systemd-networkd[1798]: lxc_health: Gained IPv6LL
Feb 13 19:25:43.730072 kubelet[3345]: I0213 19:25:43.729488 3345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-87kt5" podStartSLOduration=10.729465455 podStartE2EDuration="10.729465455s" podCreationTimestamp="2025-02-13 19:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:25:38.614122924 +0000 UTC m=+117.287004657" watchObservedRunningTime="2025-02-13 19:25:43.729465455 +0000 UTC m=+122.402347191"
Feb 13 19:25:46.442834 ntpd[1864]: Listen normally on 14 lxc_health [fe80::5caf:cdff:fef1:6f3c%14]:123
Feb 13 19:25:46.444308 ntpd[1864]: 13 Feb 19:25:46 ntpd[1864]: Listen normally on 14 lxc_health [fe80::5caf:cdff:fef1:6f3c%14]:123
Feb 13 19:25:47.915738 systemd[1]: run-containerd-runc-k8s.io-6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3-runc.5nQE4c.mount: Deactivated successfully.
Feb 13 19:25:50.116940 systemd[1]: run-containerd-runc-k8s.io-6dd00f45d51adb225cfeeeb14ef5f6719591c8e93f460bdf113484f4b8d165c3-runc.fLK7d8.mount: Deactivated successfully.
Feb 13 19:25:50.251802 sshd[5186]: Connection closed by 139.178.89.65 port 40290
Feb 13 19:25:50.254262 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Feb 13 19:25:50.257967 systemd[1]: sshd@29-172.31.18.187:22-139.178.89.65:40290.service: Deactivated successfully.
Feb 13 19:25:50.261238 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:25:50.263651 systemd-logind[1871]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:25:50.265645 systemd-logind[1871]: Removed session 30.
Feb 13 19:26:04.479723 kubelet[3345]: E0213 19:26:04.479676 3345 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-18-187)"
Feb 13 19:26:05.854842 systemd[1]: cri-containerd-de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87.scope: Deactivated successfully.
Feb 13 19:26:05.855712 systemd[1]: cri-containerd-de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87.scope: Consumed 3.066s CPU time, 24.1M memory peak, 0B memory swap peak.
Feb 13 19:26:05.968031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87-rootfs.mount: Deactivated successfully.
Feb 13 19:26:05.983243 containerd[1889]: time="2025-02-13T19:26:05.983164010Z" level=info msg="shim disconnected" id=de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87 namespace=k8s.io
Feb 13 19:26:05.983243 containerd[1889]: time="2025-02-13T19:26:05.983237419Z" level=warning msg="cleaning up after shim disconnected" id=de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87 namespace=k8s.io
Feb 13 19:26:05.983243 containerd[1889]: time="2025-02-13T19:26:05.983249223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:26:06.658810 kubelet[3345]: I0213 19:26:06.658770 3345 scope.go:117] "RemoveContainer" containerID="de34058abfbb51cf7280d9fb5593818319ef135b319d49637125398bb031ae87"
Feb 13 19:26:06.677052 containerd[1889]: time="2025-02-13T19:26:06.676973432Z" level=info msg="CreateContainer within sandbox \"780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:26:06.720939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682580861.mount: Deactivated successfully.
Feb 13 19:26:06.722799 containerd[1889]: time="2025-02-13T19:26:06.722499039Z" level=info msg="CreateContainer within sandbox \"780860e4417507cc4b6ead3b16effb821243811823bdf4d2a2b710ecdce8f5fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3a2aacfb712d6d5f24621d1916cb4ad156935026dfba157b4ada568764e7b307\""
Feb 13 19:26:06.727369 containerd[1889]: time="2025-02-13T19:26:06.727329251Z" level=info msg="StartContainer for \"3a2aacfb712d6d5f24621d1916cb4ad156935026dfba157b4ada568764e7b307\""
Feb 13 19:26:06.806560 systemd[1]: Started cri-containerd-3a2aacfb712d6d5f24621d1916cb4ad156935026dfba157b4ada568764e7b307.scope - libcontainer container 3a2aacfb712d6d5f24621d1916cb4ad156935026dfba157b4ada568764e7b307.
Feb 13 19:26:06.878817 containerd[1889]: time="2025-02-13T19:26:06.878773691Z" level=info msg="StartContainer for \"3a2aacfb712d6d5f24621d1916cb4ad156935026dfba157b4ada568764e7b307\" returns successfully"
Feb 13 19:26:09.953522 systemd[1]: cri-containerd-203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c.scope: Deactivated successfully.
Feb 13 19:26:09.954701 systemd[1]: cri-containerd-203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c.scope: Consumed 2.636s CPU time, 18.8M memory peak, 0B memory swap peak.
Feb 13 19:26:10.019472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c-rootfs.mount: Deactivated successfully.
Feb 13 19:26:10.057728 containerd[1889]: time="2025-02-13T19:26:10.057655142Z" level=info msg="shim disconnected" id=203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c namespace=k8s.io
Feb 13 19:26:10.057728 containerd[1889]: time="2025-02-13T19:26:10.057710892Z" level=warning msg="cleaning up after shim disconnected" id=203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c namespace=k8s.io
Feb 13 19:26:10.057728 containerd[1889]: time="2025-02-13T19:26:10.057723022Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:26:10.682713 kubelet[3345]: I0213 19:26:10.676586 3345 scope.go:117] "RemoveContainer" containerID="203a7bd4ed7938730eb541ae6b08d5a439a0b6a2de6b3a90a616205f6cd7357c"
Feb 13 19:26:10.688226 containerd[1889]: time="2025-02-13T19:26:10.688181123Z" level=info msg="CreateContainer within sandbox \"971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:26:10.731492 containerd[1889]: time="2025-02-13T19:26:10.731144444Z" level=info msg="CreateContainer within sandbox \"971f2dbae813113133cfbf82f3832f76753886a120cc2a944b66234830649e38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"71e7780b35afd12f7a22a5e7f499139fab4c95c16c8628757c2b5d7173c52fba\""
Feb 13 19:26:10.734077 containerd[1889]: time="2025-02-13T19:26:10.734039263Z" level=info msg="StartContainer for \"71e7780b35afd12f7a22a5e7f499139fab4c95c16c8628757c2b5d7173c52fba\""
Feb 13 19:26:10.814352 systemd[1]: Started cri-containerd-71e7780b35afd12f7a22a5e7f499139fab4c95c16c8628757c2b5d7173c52fba.scope - libcontainer container 71e7780b35afd12f7a22a5e7f499139fab4c95c16c8628757c2b5d7173c52fba.
Feb 13 19:26:10.868225 containerd[1889]: time="2025-02-13T19:26:10.868168506Z" level=info msg="StartContainer for \"71e7780b35afd12f7a22a5e7f499139fab4c95c16c8628757c2b5d7173c52fba\" returns successfully"
Feb 13 19:26:14.481044 kubelet[3345]: E0213 19:26:14.480304 3345 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:26:24.481188 kubelet[3345]: E0213 19:26:24.481007 3345 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-187?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"