Jan 30 04:39:16.889557 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 04:39:16.889577 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 04:39:16.889588 kernel: BIOS-provided physical RAM map: Jan 30 04:39:16.889594 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 04:39:16.889599 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 04:39:16.889604 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 04:39:16.889610 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jan 30 04:39:16.889616 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jan 30 04:39:16.889623 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 04:39:16.889628 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 04:39:16.889634 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 04:39:16.889639 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 04:39:16.889644 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 04:39:16.889650 kernel: NX (Execute Disable) protection: active Jan 30 04:39:16.889659 kernel: APIC: Static calls initialized Jan 30 04:39:16.889665 kernel: SMBIOS 3.0.0 present. 
Jan 30 04:39:16.889671 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 30 04:39:16.889677 kernel: Hypervisor detected: KVM Jan 30 04:39:16.889682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 04:39:16.889688 kernel: kvm-clock: using sched offset of 3019799818 cycles Jan 30 04:39:16.889694 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 04:39:16.889700 kernel: tsc: Detected 2445.404 MHz processor Jan 30 04:39:16.889706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 04:39:16.889713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 04:39:16.889721 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jan 30 04:39:16.889727 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 04:39:16.889733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 04:39:16.889739 kernel: Using GB pages for direct mapping Jan 30 04:39:16.889745 kernel: ACPI: Early table checksum verification disabled Jan 30 04:39:16.889763 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Jan 30 04:39:16.889769 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889775 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889781 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889789 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jan 30 04:39:16.889795 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889801 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889807 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889813 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 04:39:16.889819 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Jan 30 04:39:16.889825 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Jan 30 04:39:16.889836 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jan 30 04:39:16.889842 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Jan 30 04:39:16.889848 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Jan 30 04:39:16.889854 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Jan 30 04:39:16.889860 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Jan 30 04:39:16.889866 kernel: No NUMA configuration found Jan 30 04:39:16.889872 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jan 30 04:39:16.889881 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jan 30 04:39:16.889887 kernel: Zone ranges: Jan 30 04:39:16.889893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 04:39:16.889899 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jan 30 04:39:16.889905 kernel: Normal empty Jan 30 04:39:16.889911 kernel: Movable zone start for each node Jan 30 04:39:16.889917 kernel: Early memory node ranges Jan 30 04:39:16.889923 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 04:39:16.889929 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jan 30 04:39:16.890179 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jan 30 04:39:16.890188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 04:39:16.890195 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 04:39:16.890201 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 04:39:16.890207 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 04:39:16.890214 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 04:39:16.890220 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 04:39:16.890226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 04:39:16.890232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 04:39:16.890241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 04:39:16.890248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 04:39:16.890254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 04:39:16.890260 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 04:39:16.890266 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 04:39:16.890272 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 04:39:16.890278 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 04:39:16.890285 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 04:39:16.890291 kernel: Booting paravirtualized kernel on KVM Jan 30 04:39:16.890297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 04:39:16.890305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 04:39:16.890312 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 04:39:16.890318 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 04:39:16.890324 kernel: pcpu-alloc: [0] 0 1 Jan 30 04:39:16.890330 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 04:39:16.890337 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 04:39:16.890344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 04:39:16.890350 kernel: random: crng init done Jan 30 04:39:16.890358 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 04:39:16.890365 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 04:39:16.890371 kernel: Fallback order for Node 0: 0 Jan 30 04:39:16.890377 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jan 30 04:39:16.890383 kernel: Policy zone: DMA32 Jan 30 04:39:16.890389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 04:39:16.890397 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127200K reserved, 0K cma-reserved) Jan 30 04:39:16.890408 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 04:39:16.890425 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 04:39:16.890441 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 04:39:16.890451 kernel: Dynamic Preempt: voluntary Jan 30 04:39:16.890461 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 04:39:16.890473 kernel: rcu: RCU event tracing is enabled. Jan 30 04:39:16.890483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 04:39:16.890489 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 04:39:16.890495 kernel: Rude variant of Tasks RCU enabled. Jan 30 04:39:16.890502 kernel: Tracing variant of Tasks RCU enabled. Jan 30 04:39:16.890508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 04:39:16.890517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 04:39:16.890523 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 04:39:16.890530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 04:39:16.890536 kernel: Console: colour VGA+ 80x25 Jan 30 04:39:16.890542 kernel: printk: console [tty0] enabled Jan 30 04:39:16.890548 kernel: printk: console [ttyS0] enabled Jan 30 04:39:16.890554 kernel: ACPI: Core revision 20230628 Jan 30 04:39:16.890560 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 04:39:16.890567 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 04:39:16.890575 kernel: x2apic enabled Jan 30 04:39:16.890582 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 04:39:16.890588 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 04:39:16.890594 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 04:39:16.890600 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404) Jan 30 04:39:16.890607 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 04:39:16.890613 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 04:39:16.890619 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 04:39:16.890634 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 04:39:16.890641 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 04:39:16.890647 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 04:39:16.890654 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 04:39:16.890663 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 04:39:16.890670 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 04:39:16.890676 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 04:39:16.890683 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 04:39:16.890689 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jan 30 04:39:16.890699 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 04:39:16.890706 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 04:39:16.890712 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 04:39:16.890719 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 04:39:16.890725 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 04:39:16.890732 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 04:39:16.890738 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 04:39:16.890745 kernel: Freeing SMP alternatives memory: 32K Jan 30 04:39:16.890770 kernel: pid_max: default: 32768 minimum: 301 Jan 30 04:39:16.890777 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 04:39:16.890783 kernel: landlock: Up and running. Jan 30 04:39:16.890789 kernel: SELinux: Initializing. Jan 30 04:39:16.890796 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 04:39:16.890803 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 04:39:16.890809 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 04:39:16.890816 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 04:39:16.890822 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 04:39:16.890831 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 04:39:16.890838 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 04:39:16.890845 kernel: ... version: 0 Jan 30 04:39:16.890851 kernel: ... bit width: 48 Jan 30 04:39:16.890857 kernel: ... generic registers: 6 Jan 30 04:39:16.890864 kernel: ... value mask: 0000ffffffffffff Jan 30 04:39:16.890870 kernel: ... max period: 00007fffffffffff Jan 30 04:39:16.890876 kernel: ... fixed-purpose events: 0 Jan 30 04:39:16.890883 kernel: ... event mask: 000000000000003f Jan 30 04:39:16.890891 kernel: signal: max sigframe size: 1776 Jan 30 04:39:16.890898 kernel: rcu: Hierarchical SRCU implementation. Jan 30 04:39:16.890904 kernel: rcu: Max phase no-delay instances is 400. Jan 30 04:39:16.890911 kernel: smp: Bringing up secondary CPUs ... Jan 30 04:39:16.890917 kernel: smpboot: x86: Booting SMP configuration: Jan 30 04:39:16.890924 kernel: .... 
node #0, CPUs: #1 Jan 30 04:39:16.890930 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 04:39:16.890936 kernel: smpboot: Max logical packages: 1 Jan 30 04:39:16.890943 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Jan 30 04:39:16.892011 kernel: devtmpfs: initialized Jan 30 04:39:16.892024 kernel: x86/mm: Memory block size: 128MB Jan 30 04:39:16.892032 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 04:39:16.892039 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 04:39:16.892045 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 04:39:16.892052 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 04:39:16.892059 kernel: audit: initializing netlink subsys (disabled) Jan 30 04:39:16.892065 kernel: audit: type=2000 audit(1738211956.097:1): state=initialized audit_enabled=0 res=1 Jan 30 04:39:16.892072 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 04:39:16.892083 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 04:39:16.892089 kernel: cpuidle: using governor menu Jan 30 04:39:16.892096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 04:39:16.892102 kernel: dca service started, version 1.12.1 Jan 30 04:39:16.892109 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 04:39:16.892116 kernel: PCI: Using configuration type 1 for base access Jan 30 04:39:16.892122 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 04:39:16.892129 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 04:39:16.892135 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 04:39:16.892144 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 04:39:16.892151 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 04:39:16.892157 kernel: ACPI: Added _OSI(Module Device) Jan 30 04:39:16.892164 kernel: ACPI: Added _OSI(Processor Device) Jan 30 04:39:16.892170 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 04:39:16.892177 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 04:39:16.892183 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 04:39:16.892190 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 04:39:16.892196 kernel: ACPI: Interpreter enabled Jan 30 04:39:16.892205 kernel: ACPI: PM: (supports S0 S5) Jan 30 04:39:16.892211 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 04:39:16.892218 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 04:39:16.892224 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 04:39:16.892231 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 04:39:16.892237 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 04:39:16.892397 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 04:39:16.892513 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 04:39:16.892624 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 04:39:16.892633 kernel: PCI host bridge to bus 0000:00 Jan 30 04:39:16.892741 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 04:39:16.892857 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 
04:39:16.892970 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 04:39:16.893070 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jan 30 04:39:16.893165 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 04:39:16.893264 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 04:39:16.893358 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 04:39:16.893476 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 04:39:16.893591 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 30 04:39:16.893696 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jan 30 04:39:16.893817 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jan 30 04:39:16.893927 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jan 30 04:39:16.896210 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jan 30 04:39:16.896368 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 04:39:16.896511 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.896618 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jan 30 04:39:16.896738 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.896862 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jan 30 04:39:16.896998 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.897106 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jan 30 04:39:16.897217 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.897322 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jan 30 04:39:16.897433 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.897537 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jan 30 04:39:16.897654 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.897774 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jan 30 04:39:16.897888 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.900399 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jan 30 04:39:16.900524 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.900631 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jan 30 04:39:16.900761 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 30 04:39:16.900871 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jan 30 04:39:16.901005 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 04:39:16.901112 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 04:39:16.901222 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 04:39:16.901325 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jan 30 04:39:16.901432 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jan 30 04:39:16.901542 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 04:39:16.901690 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 04:39:16.901826 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 04:39:16.901937 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jan 30 04:39:16.904116 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 30 
04:39:16.904234 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Jan 30 04:39:16.904350 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 04:39:16.904454 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 04:39:16.904557 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 04:39:16.904675 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 30 04:39:16.904801 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jan 30 04:39:16.904908 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 04:39:16.907051 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 04:39:16.907161 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 04:39:16.907280 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 30 04:39:16.907390 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jan 30 04:39:16.907496 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jan 30 04:39:16.907647 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 04:39:16.907781 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 04:39:16.907901 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 04:39:16.908100 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 30 04:39:16.908215 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 30 04:39:16.908320 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 04:39:16.908446 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 04:39:16.908559 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 04:39:16.908678 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 30 04:39:16.908803 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jan 30 04:39:16.908919 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jan 30 04:39:16.910109 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 04:39:16.910227 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 04:39:16.910331 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 04:39:16.910455 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 30 04:39:16.910565 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jan 30 04:39:16.910674 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jan 30 04:39:16.910801 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 04:39:16.911995 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 04:39:16.912112 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 04:39:16.912121 kernel: acpiphp: Slot [0] registered Jan 30 04:39:16.912239 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 04:39:16.912349 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jan 30 04:39:16.912458 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jan 30 04:39:16.912616 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jan 30 04:39:16.912774 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 04:39:16.912884 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 04:39:16.913017 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 04:39:16.913028 
kernel: acpiphp: Slot [0-2] registered Jan 30 04:39:16.913133 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 04:39:16.913238 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 04:39:16.913357 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 04:39:16.913374 kernel: acpiphp: Slot [0-3] registered Jan 30 04:39:16.913491 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 04:39:16.913596 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 04:39:16.913699 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 04:39:16.913709 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 04:39:16.913716 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 04:39:16.913723 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 04:39:16.913730 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 04:39:16.913736 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 04:39:16.913746 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 04:39:16.913774 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 04:39:16.913781 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 04:39:16.913788 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 04:39:16.913795 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 04:39:16.913802 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 04:39:16.913809 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 04:39:16.913815 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 04:39:16.913822 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 04:39:16.913831 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 04:39:16.913838 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 04:39:16.913844 kernel: iommu: Default domain type: Translated Jan 30 04:39:16.913851 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 04:39:16.913857 kernel: PCI: Using ACPI for IRQ routing Jan 30 04:39:16.913864 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 04:39:16.913871 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 04:39:16.913877 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jan 30 04:39:16.916039 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 04:39:16.916159 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 04:39:16.916264 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 04:39:16.916273 kernel: vgaarb: loaded Jan 30 04:39:16.916280 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 04:39:16.916287 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 04:39:16.916294 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 04:39:16.916300 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 04:39:16.916308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 04:39:16.916316 kernel: pnp: PnP ACPI init Jan 30 04:39:16.916479 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 04:39:16.916492 kernel: pnp: PnP ACPI: found 5 devices Jan 30 04:39:16.916500 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 04:39:16.916506 kernel: NET: Registered PF_INET protocol family Jan 30 
04:39:16.916513 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 04:39:16.916520 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 04:39:16.916527 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 04:39:16.916533 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 04:39:16.916544 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 04:39:16.916550 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 04:39:16.916557 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 04:39:16.916564 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 04:39:16.916570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 04:39:16.916577 kernel: NET: Registered PF_XDP protocol family Jan 30 04:39:16.916683 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 04:39:16.916802 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 04:39:16.916914 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 04:39:16.917048 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 30 04:39:16.917154 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 30 04:39:16.917257 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 30 04:39:16.917378 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 04:39:16.917485 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 04:39:16.917589 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 04:39:16.917741 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 04:39:16.919993 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 04:39:16.920142 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 04:39:16.920251 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 04:39:16.920355 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 04:39:16.920457 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 04:39:16.920561 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 04:39:16.920671 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 04:39:16.920812 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 04:39:16.921164 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 04:39:16.921469 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 04:39:16.921576 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 04:39:16.924069 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 04:39:16.924192 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 04:39:16.924298 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 04:39:16.924402 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 04:39:16.924507 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 30 04:39:16.924616 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 04:39:16.924721 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 04:39:16.924849 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 04:39:16.924980 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 30 04:39:16.925091 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 04:39:16.925196 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 04:39:16.925306 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 04:39:16.925409 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 30 04:39:16.925512 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 04:39:16.925656 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 04:39:16.925788 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 04:39:16.925895 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 04:39:16.926539 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 04:39:16.926651 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jan 30 04:39:16.926761 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 04:39:16.926862 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 04:39:16.926996 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 30 04:39:16.927102 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 04:39:16.927217 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 30 04:39:16.927325 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 04:39:16.927434 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 30 04:39:16.927534 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 04:39:16.927641 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 30 04:39:16.927741 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 04:39:16.927871 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 30 04:39:16.928036 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 04:39:16.928150 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jan 30 04:39:16.928250 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 04:39:16.928356 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 30 04:39:16.928455 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 30 04:39:16.928552 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 04:39:16.928663 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 30 04:39:16.928776 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jan 30 04:39:16.928877 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 04:39:16.929006 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 30 04:39:16.929110 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 30 04:39:16.929209 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 04:39:16.929219 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 04:39:16.929231 kernel: PCI: CLS 0 bytes, default 64 Jan 30 04:39:16.929238 kernel: Initialise system trusted keyrings Jan 30 04:39:16.929245 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 04:39:16.929252 kernel: Key type asymmetric registered Jan 30 04:39:16.929259 kernel: Asymmetric key parser 'x509' registered Jan 30 04:39:16.929266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Jan 30 04:39:16.929273 kernel: io scheduler mq-deadline registered Jan 30 04:39:16.929279 kernel: io scheduler kyber registered Jan 30 04:39:16.929286 kernel: io scheduler bfq registered Jan 30 04:39:16.929396 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 04:39:16.929504 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 04:39:16.929610 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 04:39:16.929716 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 04:39:16.929835 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 04:39:16.929943 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 04:39:16.930095 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 04:39:16.930201 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 04:39:16.930311 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 04:39:16.930414 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 04:39:16.930516 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 04:39:16.930620 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 04:39:16.930723 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 04:39:16.930840 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 04:39:16.930945 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 04:39:16.931154 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 04:39:16.931169 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 04:39:16.931273 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 30 04:39:16.931376 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 30 04:39:16.931386 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 04:39:16.931393 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 30 04:39:16.931400 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 04:39:16.931407 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 04:39:16.931414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 04:39:16.931421 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 04:39:16.931431 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 04:39:16.931438 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 04:39:16.931546 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 04:39:16.931645 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 04:39:16.931742 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T04:39:16 UTC (1738211956) Jan 30 04:39:16.931863 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 04:39:16.931874 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 04:39:16.931882 kernel: NET: Registered PF_INET6 protocol family Jan 30 04:39:16.931893 kernel: Segment Routing with IPv6 Jan 30 04:39:16.931900 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 04:39:16.931906 kernel: NET: Registered PF_PACKET protocol family Jan 30 04:39:16.931913 kernel: Key type dns_resolver registered Jan 30 04:39:16.931920 kernel: IPI shorthand broadcast: enabled Jan 30 04:39:16.931927 kernel: sched_clock: Marking stable (1091005836, 133725801)->(1234546796, -9815159) Jan 30 04:39:16.931934 kernel: registered taskstats version 1 Jan 30 04:39:16.931940 kernel: Loading compiled-in X.509 certificates Jan 30 04:39:16.931963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 04:39:16.931973 kernel: Key type .fscrypt registered Jan 30 04:39:16.931979 kernel: Key type fscrypt-provisioning registered Jan 30 04:39:16.931986 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 04:39:16.931993 kernel: ima: Allocated hash algorithm: sha1 Jan 30 04:39:16.932000 kernel: ima: No architecture policies found Jan 30 04:39:16.932007 kernel: clk: Disabling unused clocks Jan 30 04:39:16.932014 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 04:39:16.932021 kernel: Write protecting the kernel read-only data: 38912k Jan 30 04:39:16.932028 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 04:39:16.932038 kernel: Run /init as init process Jan 30 04:39:16.932044 kernel: with arguments: Jan 30 04:39:16.932052 kernel: /init Jan 30 04:39:16.932059 kernel: with environment: Jan 30 04:39:16.932065 kernel: HOME=/ Jan 30 04:39:16.932072 kernel: TERM=linux Jan 30 04:39:16.932081 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 04:39:16.932090 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 04:39:16.932102 systemd[1]: Detected virtualization kvm. Jan 30 04:39:16.932110 systemd[1]: Detected architecture x86-64. Jan 30 04:39:16.932117 systemd[1]: Running in initrd. Jan 30 04:39:16.932124 systemd[1]: No hostname configured, using default hostname. Jan 30 04:39:16.932131 systemd[1]: Hostname set to . Jan 30 04:39:16.932139 systemd[1]: Initializing machine ID from VM UUID. Jan 30 04:39:16.932146 systemd[1]: Queued start job for default target initrd.target. Jan 30 04:39:16.932153 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 04:39:16.932163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 04:39:16.932171 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 04:39:16.932179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 04:39:16.932186 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 04:39:16.932194 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 04:39:16.932202 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 04:39:16.932212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 04:39:16.932220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 04:39:16.932227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 04:39:16.932235 systemd[1]: Reached target paths.target - Path Units. Jan 30 04:39:16.932242 systemd[1]: Reached target slices.target - Slice Units. Jan 30 04:39:16.932249 systemd[1]: Reached target swap.target - Swaps. Jan 30 04:39:16.932256 systemd[1]: Reached target timers.target - Timer Units. Jan 30 04:39:16.932264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 04:39:16.932271 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 04:39:16.932281 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 04:39:16.932288 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 04:39:16.932296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 04:39:16.932303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 04:39:16.932310 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 04:39:16.932318 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 04:39:16.932326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 04:39:16.932333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 04:39:16.932342 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 04:39:16.932349 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 04:39:16.932357 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 04:39:16.932364 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 04:39:16.932371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 04:39:16.932379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 04:39:16.932409 systemd-journald[188]: Collecting audit messages is disabled. Jan 30 04:39:16.932432 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 04:39:16.932439 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 04:39:16.932447 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 04:39:16.932457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 04:39:16.932464 kernel: Bridge firewalling registered Jan 30 04:39:16.932472 systemd-journald[188]: Journal started Jan 30 04:39:16.932489 systemd-journald[188]: Runtime Journal (/run/log/journal/b569694ee91a492c85bcc77100bd2e54) is 4.8M, max 38.3M, 33.5M free. Jan 30 04:39:16.896425 systemd-modules-load[189]: Inserted module 'overlay' Jan 30 04:39:16.925790 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 30 04:39:16.959973 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 04:39:16.960091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 04:39:16.961480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:16.963923 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 04:39:16.970094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 04:39:16.972098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 04:39:16.977699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 04:39:16.984670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 04:39:16.994625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 04:39:16.997873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 04:39:16.999487 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 04:39:17.000344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 04:39:17.006112 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 04:39:17.009659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 04:39:17.018341 dracut-cmdline[223]: dracut-dracut-053 Jan 30 04:39:17.022120 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 04:39:17.049674 systemd-resolved[224]: Positive Trust Anchors: Jan 30 04:39:17.049689 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 04:39:17.049715 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 04:39:17.055414 systemd-resolved[224]: Defaulting to hostname 'linux'. Jan 30 04:39:17.057057 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 04:39:17.057731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 04:39:17.088982 kernel: SCSI subsystem initialized Jan 30 04:39:17.097985 kernel: Loading iSCSI transport class v2.0-870. Jan 30 04:39:17.106985 kernel: iscsi: registered transport (tcp) Jan 30 04:39:17.130070 kernel: iscsi: registered transport (qla4xxx) Jan 30 04:39:17.130132 kernel: QLogic iSCSI HBA Driver Jan 30 04:39:17.171185 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 04:39:17.176132 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 04:39:17.204302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 04:39:17.204360 kernel: device-mapper: uevent: version 1.0.3 Jan 30 04:39:17.207125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 04:39:17.244989 kernel: raid6: avx2x4 gen() 34691 MB/s Jan 30 04:39:17.261973 kernel: raid6: avx2x2 gen() 34719 MB/s Jan 30 04:39:17.279092 kernel: raid6: avx2x1 gen() 23496 MB/s Jan 30 04:39:17.279133 kernel: raid6: using algorithm avx2x2 gen() 34719 MB/s Jan 30 04:39:17.298040 kernel: raid6: .... xor() 30065 MB/s, rmw enabled Jan 30 04:39:17.298096 kernel: raid6: using avx2x2 recovery algorithm Jan 30 04:39:17.316983 kernel: xor: automatically using best checksumming function avx Jan 30 04:39:17.451000 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 04:39:17.464735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 04:39:17.476171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 04:39:17.491279 systemd-udevd[407]: Using default interface naming scheme 'v255'. Jan 30 04:39:17.495212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 04:39:17.506136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 04:39:17.518271 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jan 30 04:39:17.552133 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 04:39:17.559085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 04:39:17.623335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 04:39:17.632115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 04:39:17.643437 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 04:39:17.644924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 04:39:17.647110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 04:39:17.647550 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 04:39:17.651923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 04:39:17.670770 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 04:39:17.712776 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 04:39:17.718402 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 04:39:17.718453 kernel: AES CTR mode by8 optimization enabled Jan 30 04:39:17.725051 kernel: ACPI: bus type USB registered Jan 30 04:39:17.725082 kernel: usbcore: registered new interface driver usbfs Jan 30 04:39:17.726164 kernel: usbcore: registered new interface driver hub Jan 30 04:39:17.727485 kernel: usbcore: registered new device driver usb Jan 30 04:39:17.742065 kernel: scsi host0: Virtio SCSI HBA Jan 30 04:39:17.752105 kernel: libata version 3.00 loaded. Jan 30 04:39:17.753979 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 04:39:17.803533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 04:39:17.803654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 04:39:17.805522 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 04:39:17.806344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 04:39:17.806451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:17.811265 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 04:39:17.819675 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 04:39:17.828100 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 04:39:17.828249 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 04:39:17.828379 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 04:39:17.828503 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 04:39:17.828626 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 04:39:17.828745 kernel: hub 1-0:1.0: USB hub found Jan 30 04:39:17.828921 kernel: hub 1-0:1.0: 4 ports detected Jan 30 04:39:17.829091 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 04:39:17.829237 kernel: hub 2-0:1.0: USB hub found Jan 30 04:39:17.829375 kernel: hub 2-0:1.0: 4 ports detected Jan 30 04:39:17.833911 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 04:39:17.860936 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 04:39:17.860983 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 04:39:17.861155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 04:39:17.861282 kernel: scsi host1: ahci Jan 30 04:39:17.861413 kernel: scsi host2: ahci Jan 30 04:39:17.861539 kernel: scsi host3: ahci Jan 30 04:39:17.861669 kernel: scsi host4: ahci Jan 30 04:39:17.861813 kernel: scsi host5: ahci Jan 30 04:39:17.863123 kernel: scsi host6: ahci Jan 30 04:39:17.863258 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 Jan 30 04:39:17.863273 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 Jan 30 04:39:17.863283 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 Jan 30 04:39:17.863292 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 Jan 30 04:39:17.863301 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 Jan 30 04:39:17.863309 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 Jan 30 04:39:17.830191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 04:39:17.907210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:17.914079 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 04:39:17.930394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 04:39:18.060983 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 04:39:18.174467 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 04:39:18.174543 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 04:39:18.174556 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 04:39:18.174568 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 04:39:18.174578 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 04:39:18.174589 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 04:39:18.175980 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 04:39:18.178380 kernel: ata1.00: applying bridge limits Jan 30 04:39:18.179050 kernel: ata1.00: configured for UDMA/100 Jan 30 04:39:18.182977 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 04:39:18.198979 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 04:39:18.205562 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 30 04:39:18.226560 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 04:39:18.226717 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 04:39:18.226897 kernel: usbcore: registered new interface driver usbhid Jan 30 04:39:18.226909 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 30 04:39:18.227074 kernel: usbhid: USB HID core driver Jan 30 04:39:18.227085 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 04:39:18.227219 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 04:39:18.227229 kernel: GPT:17805311 != 80003071 Jan 30 04:39:18.227237 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 04:39:18.227246 kernel: GPT:17805311 != 80003071 Jan 30 04:39:18.227254 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 04:39:18.227263 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 04:39:18.227275 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 04:39:18.235333 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 30 04:39:18.235361 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 04:39:18.242480 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 04:39:18.251359 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 04:39:18.251372 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 30 04:39:18.265981 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (465) Jan 30 04:39:18.268997 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (454) Jan 30 04:39:18.279144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 04:39:18.285410 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 04:39:18.292232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 04:39:18.298806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 04:39:18.300088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 04:39:18.308092 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 30 04:39:18.313611 disk-uuid[576]: Primary Header is updated. Jan 30 04:39:18.313611 disk-uuid[576]: Secondary Entries is updated. Jan 30 04:39:18.313611 disk-uuid[576]: Secondary Header is updated. Jan 30 04:39:18.319980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 04:39:18.325990 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 04:39:19.329000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 04:39:19.330102 disk-uuid[578]: The operation has completed successfully. Jan 30 04:39:19.379568 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 04:39:19.379723 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 04:39:19.396065 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 04:39:19.400438 sh[594]: Success Jan 30 04:39:19.413995 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 04:39:19.472405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 04:39:19.474313 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 04:39:19.474913 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 04:39:19.502355 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 04:39:19.502400 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 04:39:19.504022 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 04:39:19.506467 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 04:39:19.506481 kernel: BTRFS info (device dm-0): using free space tree Jan 30 04:39:19.515976 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 04:39:19.518041 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 04:39:19.519497 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 04:39:19.525075 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 04:39:19.528069 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 04:39:19.545246 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 04:39:19.545280 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 04:39:19.545295 kernel: BTRFS info (device sda6): using free space tree Jan 30 04:39:19.549677 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 04:39:19.549702 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 04:39:19.559446 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 04:39:19.561892 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 04:39:19.565438 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 04:39:19.571084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 04:39:19.622337 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 04:39:19.631110 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
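Once verity-setup.service has finished and /sysusr/usr is mounted, the dm-verity mapping behind /dev/mapper/usr can be inspected by hand. The commands below are illustrative and are not part of this log.

    veritysetup status usr    # verity parameters of the active /dev/mapper/usr mapping
    dmsetup table usr         # raw device-mapper verity target line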
Jan 30 04:39:19.651969 ignition[708]: Ignition 2.20.0 Jan 30 04:39:19.651983 ignition[708]: Stage: fetch-offline Jan 30 04:39:19.652026 ignition[708]: no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:19.652035 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:19.652129 ignition[708]: parsed url from cmdline: "" Jan 30 04:39:19.655381 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 04:39:19.652133 ignition[708]: no config URL provided Jan 30 04:39:19.652139 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 04:39:19.652147 ignition[708]: no config at "/usr/lib/ignition/user.ign" Jan 30 04:39:19.657693 systemd-networkd[775]: lo: Link UP Jan 30 04:39:19.652152 ignition[708]: failed to fetch config: resource requires networking Jan 30 04:39:19.657697 systemd-networkd[775]: lo: Gained carrier Jan 30 04:39:19.652311 ignition[708]: Ignition finished successfully Jan 30 04:39:19.660243 systemd-networkd[775]: Enumeration completed Jan 30 04:39:19.660869 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:19.660879 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 04:39:19.661040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 04:39:19.661837 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:19.661841 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 04:39:19.663562 systemd-networkd[775]: eth0: Link UP Jan 30 04:39:19.663567 systemd-networkd[775]: eth0: Gained carrier Jan 30 04:39:19.663579 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:19.663824 systemd[1]: Reached target network.target - Network. Jan 30 04:39:19.667578 systemd-networkd[775]: eth1: Link UP Jan 30 04:39:19.667582 systemd-networkd[775]: eth1: Gained carrier Jan 30 04:39:19.667590 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:19.671791 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
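The "found matching network" messages mean both NICs were matched by the catch-all unit shipped in the image. Such a unit typically pairs a wildcard match with DHCP; the contents sketched below are an assumption for illustration, not a copy of the actual file.

    cat /usr/lib/systemd/network/zz-default.network
    #   [Match]
    #   Name=*
    #
    #   [Network]
    #   DHCP=yes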
Jan 30 04:39:19.681784 ignition[782]: Ignition 2.20.0 Jan 30 04:39:19.682392 ignition[782]: Stage: fetch Jan 30 04:39:19.682582 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:19.682594 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:19.682670 ignition[782]: parsed url from cmdline: "" Jan 30 04:39:19.682674 ignition[782]: no config URL provided Jan 30 04:39:19.682679 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 04:39:19.682687 ignition[782]: no config at "/usr/lib/ignition/user.ign" Jan 30 04:39:19.682709 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 04:39:19.683072 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 04:39:19.699012 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 04:39:19.732028 systemd-networkd[775]: eth0: DHCPv4 address 116.202.14.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 04:39:19.883544 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 04:39:19.888791 ignition[782]: GET result: OK Jan 30 04:39:19.888862 ignition[782]: parsing config with SHA512: ac451a6fcd237bcc73363369ffd9ba1d3b9a7636a39a203ba9e68f229df07b5a709aa9fc4bd9c2f76dca67797fa7ff4383441b91bb563bfcd8a7bb19aa0c33d1 Jan 30 04:39:19.893577 unknown[782]: fetched base config from "system" Jan 30 04:39:19.893590 unknown[782]: fetched base config from "system" Jan 30 04:39:19.893601 unknown[782]: fetched user config from "hetzner" Jan 30 04:39:19.896082 ignition[782]: fetch: fetch complete Jan 30 04:39:19.896093 ignition[782]: fetch: fetch passed Jan 30 04:39:19.896152 ignition[782]: Ignition finished successfully Jan 30 04:39:19.898908 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 04:39:19.906127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 04:39:19.920936 ignition[789]: Ignition 2.20.0 Jan 30 04:39:19.920976 ignition[789]: Stage: kargs Jan 30 04:39:19.921188 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:19.921205 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:19.924441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 04:39:19.922340 ignition[789]: kargs: kargs passed Jan 30 04:39:19.922399 ignition[789]: Ignition finished successfully Jan 30 04:39:19.935144 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 04:39:19.948985 ignition[795]: Ignition 2.20.0 Jan 30 04:39:19.948998 ignition[795]: Stage: disks Jan 30 04:39:19.949191 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:19.949205 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:19.951601 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 04:39:19.950037 ignition[795]: disks: disks passed Jan 30 04:39:19.952699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 04:39:19.950084 ignition[795]: Ignition finished successfully Jan 30 04:39:19.954027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 04:39:19.955019 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 04:39:19.956116 systemd[1]: Reached target sysinit.target - System Initialization. 
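The first userdata fetch fails with "network is unreachable" because it races DHCP; once eth0 and eth1 have their leases the retry succeeds. From a booted instance the same endpoint can be queried by hand (illustrative, not part of this log):

    curl -s http://169.254.169.254/hetzner/v1/userdata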
Jan 30 04:39:19.957128 systemd[1]: Reached target basic.target - Basic System. Jan 30 04:39:19.966200 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 04:39:19.982685 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 04:39:19.986275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 04:39:19.994057 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 04:39:20.075968 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 04:39:20.076488 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 04:39:20.077570 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 04:39:20.090041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 04:39:20.092058 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 04:39:20.094164 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 04:39:20.097057 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 04:39:20.098210 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 04:39:20.104973 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812) Jan 30 04:39:20.107284 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 04:39:20.117147 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 04:39:20.117174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 04:39:20.117185 kernel: BTRFS info (device sda6): using free space tree Jan 30 04:39:20.117194 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 04:39:20.117203 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 04:39:20.116281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 04:39:20.125145 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 04:39:20.160796 coreos-metadata[814]: Jan 30 04:39:20.160 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 04:39:20.161995 coreos-metadata[814]: Jan 30 04:39:20.161 INFO Fetch successful Jan 30 04:39:20.162494 coreos-metadata[814]: Jan 30 04:39:20.162 INFO wrote hostname ci-4186-1-0-8-df2fd9e83c to /sysroot/etc/hostname Jan 30 04:39:20.164395 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 04:39:20.167102 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 04:39:20.171602 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 30 04:39:20.176743 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 04:39:20.181035 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 04:39:20.270006 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 04:39:20.276027 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 04:39:20.279116 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
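The hostname agent reads its value from the same metadata service and writes it into the new root before the pivot. The endpoint can be checked manually later; both commands are illustrative.

    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname
    hostnamectl          # after switch-root, shows the hostname that was written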
Jan 30 04:39:20.286979 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 04:39:20.307281 ignition[929]: INFO : Ignition 2.20.0 Jan 30 04:39:20.308161 ignition[929]: INFO : Stage: mount Jan 30 04:39:20.309808 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:20.309808 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:20.309808 ignition[929]: INFO : mount: mount passed Jan 30 04:39:20.309808 ignition[929]: INFO : Ignition finished successfully Jan 30 04:39:20.312460 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 04:39:20.313214 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 04:39:20.322072 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 04:39:20.501236 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 04:39:20.506152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 04:39:20.518994 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Jan 30 04:39:20.519049 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 04:39:20.523024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 04:39:20.523066 kernel: BTRFS info (device sda6): using free space tree Jan 30 04:39:20.529501 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 04:39:20.529545 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 04:39:20.532386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 04:39:20.557655 ignition[960]: INFO : Ignition 2.20.0 Jan 30 04:39:20.557655 ignition[960]: INFO : Stage: files Jan 30 04:39:20.559369 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:20.559369 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:20.559369 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 30 04:39:20.561334 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 04:39:20.561334 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 04:39:20.562725 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 04:39:20.563383 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 04:39:20.563383 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 04:39:20.563163 unknown[960]: wrote ssh authorized keys file for user: core Jan 30 04:39:20.565386 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 04:39:20.565386 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 04:39:20.729524 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 04:39:20.993184 systemd-networkd[775]: eth1: Gained IPv6LL Jan 30 04:39:21.633246 systemd-networkd[775]: eth0: Gained IPv6LL Jan 30 04:39:21.863616 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 04:39:21.863616 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 04:39:21.866392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 04:39:22.490366 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 04:39:22.903990 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 04:39:22.903990 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 04:39:22.906369 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 04:39:23.266225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 04:39:23.491066 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 04:39:23.491066 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(c): op(d): [finished] writing 
unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 04:39:23.492878 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 04:39:23.502868 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 04:39:23.502868 ignition[960]: INFO : files: files passed Jan 30 04:39:23.502868 ignition[960]: INFO : Ignition finished successfully Jan 30 04:39:23.495336 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 04:39:23.505548 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 04:39:23.508194 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 04:39:23.509735 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 04:39:23.510382 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 04:39:23.524329 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 04:39:23.524329 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 04:39:23.526843 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 04:39:23.528305 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 04:39:23.529745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 04:39:23.535139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 04:39:23.565112 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 04:39:23.565239 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 04:39:23.566581 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 04:39:23.567318 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 04:39:23.568387 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 04:39:23.569728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 04:39:23.584364 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 30 04:39:23.590111 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 04:39:23.599594 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 04:39:23.600309 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 04:39:23.601481 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 04:39:23.602445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 04:39:23.602597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 04:39:23.603727 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 04:39:23.604408 systemd[1]: Stopped target basic.target - Basic System. Jan 30 04:39:23.605494 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 04:39:23.606511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 04:39:23.607451 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 04:39:23.608651 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 04:39:23.609831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 04:39:23.611044 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 04:39:23.612161 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 04:39:23.613315 systemd[1]: Stopped target swap.target - Swaps. Jan 30 04:39:23.614347 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 04:39:23.614452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 04:39:23.615615 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 04:39:23.616315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 04:39:23.617301 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 04:39:23.617945 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 04:39:23.619208 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 04:39:23.619301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 04:39:23.620830 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 04:39:23.620984 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 04:39:23.621709 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 04:39:23.621886 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 04:39:23.622976 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 04:39:23.623106 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 04:39:23.629190 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 04:39:23.629656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 04:39:23.629804 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 04:39:23.632041 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 04:39:23.632516 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 04:39:23.632623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 04:39:23.633201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 30 04:39:23.633319 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 04:39:23.644428 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 04:39:23.644553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 04:39:23.649262 ignition[1012]: INFO : Ignition 2.20.0 Jan 30 04:39:23.653499 ignition[1012]: INFO : Stage: umount Jan 30 04:39:23.653499 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 04:39:23.653499 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 04:39:23.653499 ignition[1012]: INFO : umount: umount passed Jan 30 04:39:23.653499 ignition[1012]: INFO : Ignition finished successfully Jan 30 04:39:23.653404 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 04:39:23.653547 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 04:39:23.654595 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 04:39:23.654689 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 04:39:23.657749 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 04:39:23.657817 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 04:39:23.658286 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 04:39:23.658331 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 04:39:23.658762 systemd[1]: Stopped target network.target - Network. Jan 30 04:39:23.660235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 04:39:23.660287 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 04:39:23.661184 systemd[1]: Stopped target paths.target - Path Units. Jan 30 04:39:23.663846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 04:39:23.667999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 04:39:23.668511 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 04:39:23.669494 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 04:39:23.671256 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 04:39:23.671305 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 04:39:23.671740 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 04:39:23.671793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 04:39:23.672336 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 04:39:23.672397 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 04:39:23.675089 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 04:39:23.675146 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 04:39:23.676014 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 04:39:23.676554 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 04:39:23.679732 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 04:39:23.683145 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 04:39:23.683374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 04:39:23.683999 systemd-networkd[775]: eth1: DHCPv6 lease lost Jan 30 04:39:23.686334 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 04:39:23.686459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 30 04:39:23.687553 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 04:39:23.687622 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 04:39:23.687636 systemd-networkd[775]: eth0: DHCPv6 lease lost Jan 30 04:39:23.688359 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 04:39:23.688408 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 04:39:23.689678 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 04:39:23.689814 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 04:39:23.690636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 04:39:23.690671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 04:39:23.697090 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 04:39:23.697767 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 04:39:23.697854 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 04:39:23.700291 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 04:39:23.700341 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 04:39:23.701404 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 04:39:23.701472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 04:39:23.703086 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 04:39:23.722440 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 04:39:23.722714 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 04:39:23.724159 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 04:39:23.724285 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 04:39:23.725910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 04:39:23.726602 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 04:39:23.727298 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 04:39:23.727351 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 04:39:23.728447 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 04:39:23.728512 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 04:39:23.730177 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 04:39:23.730295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 04:39:23.731291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 04:39:23.731348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 04:39:23.739182 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 04:39:23.739800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 04:39:23.739869 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 04:39:23.743659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 04:39:23.743731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:23.747396 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 30 04:39:23.747517 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 04:39:23.749108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 04:39:23.754153 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 04:39:23.763929 systemd[1]: Switching root. Jan 30 04:39:23.800338 systemd-journald[188]: Journal stopped Jan 30 04:39:24.835439 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 30 04:39:24.835519 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 04:39:24.835533 kernel: SELinux: policy capability open_perms=1 Jan 30 04:39:24.835550 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 04:39:24.835560 kernel: SELinux: policy capability always_check_network=0 Jan 30 04:39:24.835572 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 04:39:24.835583 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 04:39:24.835592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 04:39:24.835602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 04:39:24.835612 kernel: audit: type=1403 audit(1738211963.962:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 04:39:24.835623 systemd[1]: Successfully loaded SELinux policy in 43.676ms. Jan 30 04:39:24.835640 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.128ms. Jan 30 04:39:24.835651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 04:39:24.835662 systemd[1]: Detected virtualization kvm. Jan 30 04:39:24.835675 systemd[1]: Detected architecture x86-64. Jan 30 04:39:24.835685 systemd[1]: Detected first boot. Jan 30 04:39:24.835697 systemd[1]: Hostname set to . Jan 30 04:39:24.835708 systemd[1]: Initializing machine ID from VM UUID. Jan 30 04:39:24.835722 zram_generator::config[1055]: No configuration found. Jan 30 04:39:24.835737 systemd[1]: Populated /etc with preset unit settings. Jan 30 04:39:24.835748 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 04:39:24.835759 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 04:39:24.835775 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 04:39:24.835800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 04:39:24.835811 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 04:39:24.835822 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 04:39:24.835832 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 04:39:24.835844 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 04:39:24.835855 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 04:39:24.835866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 04:39:24.835879 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 04:39:24.835890 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
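zram_generator logs "No configuration found", so no zram device is set up on this host. If one were wanted, a minimal configuration could look like the sketch below; the values are an assumption for illustration, not Flatcar defaults.

    cat <<'EOF' >/etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
    EOF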
Jan 30 04:39:24.835901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 04:39:24.835911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 04:39:24.835922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 04:39:24.835933 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 04:39:24.835944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 04:39:24.838430 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 04:39:24.838448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 04:39:24.838464 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 04:39:24.838476 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 04:39:24.838486 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 04:39:24.838497 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 04:39:24.838509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 04:39:24.838520 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 04:39:24.838533 systemd[1]: Reached target slices.target - Slice Units. Jan 30 04:39:24.838543 systemd[1]: Reached target swap.target - Swaps. Jan 30 04:39:24.838554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 04:39:24.838565 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 04:39:24.838575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 04:39:24.838586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 04:39:24.838600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 04:39:24.838615 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 04:39:24.838626 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 04:39:24.838639 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 04:39:24.838650 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 04:39:24.838661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:24.838672 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 04:39:24.838683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 04:39:24.838694 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 04:39:24.838707 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 04:39:24.838719 systemd[1]: Reached target machines.target - Containers. Jan 30 04:39:24.838729 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 04:39:24.838740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 04:39:24.838750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 30 04:39:24.838767 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 04:39:24.838779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 04:39:24.838803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 04:39:24.838818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 04:39:24.838829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 04:39:24.838840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 04:39:24.838851 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 04:39:24.838861 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 04:39:24.838872 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 04:39:24.838883 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 04:39:24.838894 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 04:39:24.838904 kernel: fuse: init (API version 7.39) Jan 30 04:39:24.838919 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 04:39:24.838930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 04:39:24.838941 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 04:39:24.841156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 04:39:24.841175 kernel: ACPI: bus type drm_connector registered Jan 30 04:39:24.841188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 04:39:24.841199 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 04:39:24.841210 systemd[1]: Stopped verity-setup.service. Jan 30 04:39:24.841220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:24.841236 kernel: loop: module loaded Jan 30 04:39:24.841246 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 04:39:24.841256 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 04:39:24.841267 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 04:39:24.841277 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 04:39:24.841299 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 04:39:24.841325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 04:39:24.841345 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 04:39:24.841364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 04:39:24.841381 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 04:39:24.841399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 04:39:24.841413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 04:39:24.841424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 04:39:24.841440 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 04:39:24.841453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 04:39:24.841463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 04:39:24.841474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 04:39:24.841484 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 04:39:24.841496 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 04:39:24.841534 systemd-journald[1132]: Collecting audit messages is disabled. Jan 30 04:39:24.841559 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 04:39:24.841570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 04:39:24.841581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 04:39:24.841592 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 04:39:24.841603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 04:39:24.841614 systemd-journald[1132]: Journal started Jan 30 04:39:24.841637 systemd-journald[1132]: Runtime Journal (/run/log/journal/b569694ee91a492c85bcc77100bd2e54) is 4.8M, max 38.3M, 33.5M free. Jan 30 04:39:24.491182 systemd[1]: Queued start job for default target multi-user.target. Jan 30 04:39:24.843894 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 04:39:24.512719 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 04:39:24.513304 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 04:39:24.859868 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 04:39:24.868073 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 04:39:24.873044 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 04:39:24.874970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 04:39:24.875006 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 04:39:24.877151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 04:39:24.881084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 04:39:24.889681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 04:39:24.890603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 04:39:24.892901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 04:39:24.897150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 04:39:24.897718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 04:39:24.900332 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 04:39:24.902048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 04:39:24.907202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 04:39:24.911126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 04:39:24.917087 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 04:39:24.920264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
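With the runtime journal up (4.8M used of a 38.3M cap) and the flush to persistent storage queued, the journal can later be inspected in the usual way (illustrative):

    journalctl --disk-usage        # runtime plus persistent journal size
    journalctl -b -k | head        # kernel messages of the current boot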
Jan 30 04:39:24.922272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 04:39:24.922840 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 04:39:24.924082 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 04:39:24.940096 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 04:39:24.941556 systemd-journald[1132]: Time spent on flushing to /var/log/journal/b569694ee91a492c85bcc77100bd2e54 is 65.544ms for 1141 entries. Jan 30 04:39:24.941556 systemd-journald[1132]: System Journal (/var/log/journal/b569694ee91a492c85bcc77100bd2e54) is 8.0M, max 584.8M, 576.8M free. Jan 30 04:39:25.028703 systemd-journald[1132]: Received client request to flush runtime journal. Jan 30 04:39:25.028746 kernel: loop0: detected capacity change from 0 to 8 Jan 30 04:39:25.028760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 04:39:24.967965 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 04:39:24.968562 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 04:39:24.977199 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 04:39:24.987179 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 04:39:25.014298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 04:39:25.030991 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 04:39:25.044054 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 04:39:25.051263 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 04:39:25.055284 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 04:39:25.057164 kernel: loop1: detected capacity change from 0 to 138184 Jan 30 04:39:25.067765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 04:39:25.090891 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 04:39:25.090908 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 04:39:25.102728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 04:39:25.107882 kernel: loop2: detected capacity change from 0 to 141000 Jan 30 04:39:25.148996 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 04:39:25.194177 kernel: loop4: detected capacity change from 0 to 8 Jan 30 04:39:25.199148 kernel: loop5: detected capacity change from 0 to 138184 Jan 30 04:39:25.229991 kernel: loop6: detected capacity change from 0 to 141000 Jan 30 04:39:25.255995 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 04:39:25.278039 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 04:39:25.280382 (sd-merge)[1200]: Merged extensions into '/usr'. Jan 30 04:39:25.285608 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 04:39:25.285625 systemd[1]: Reloading... Jan 30 04:39:25.402987 zram_generator::config[1232]: No configuration found. Jan 30 04:39:25.450936 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
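The (sd-merge) messages show four system extensions being overlaid onto /usr before the systemd reload. Their merge state can be listed afterwards; the commands are illustrative.

    systemd-sysext status      # which hierarchies are merged and from which images
    systemd-sysext list        # extension images found on disk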
Jan 30 04:39:25.506248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 04:39:25.549561 systemd[1]: Reloading finished in 263 ms. Jan 30 04:39:25.578848 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 04:39:25.579735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 04:39:25.580688 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 04:39:25.591143 systemd[1]: Starting ensure-sysext.service... Jan 30 04:39:25.593329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 04:39:25.597164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 04:39:25.611171 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Jan 30 04:39:25.611194 systemd[1]: Reloading... Jan 30 04:39:25.639385 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 04:39:25.639634 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 04:39:25.640590 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 04:39:25.640841 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jan 30 04:39:25.640906 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jan 30 04:39:25.647409 systemd-udevd[1272]: Using default interface naming scheme 'v255'. Jan 30 04:39:25.648425 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 04:39:25.648808 systemd-tmpfiles[1271]: Skipping /boot Jan 30 04:39:25.667175 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 04:39:25.667324 systemd-tmpfiles[1271]: Skipping /boot Jan 30 04:39:25.706974 zram_generator::config[1298]: No configuration found. Jan 30 04:39:25.841393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 04:39:25.870177 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 04:39:25.894060 kernel: ACPI: button: Power Button [PWRF] Jan 30 04:39:25.914160 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 04:39:25.915112 systemd[1]: Reloading finished in 303 ms. Jan 30 04:39:25.935677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 04:39:25.941184 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 04:39:25.944390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 04:39:25.959068 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 04:39:25.959227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:25.966248 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
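systemd rewrites docker.socket's legacy /var/run/docker.sock path to /run/docker.sock on the fly, as the warning above notes. To make the path explicit instead of relying on that rewrite, a drop-in along these lines would work; this is an illustrative sketch, not something present on this host.

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload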
Jan 30 04:39:25.977979 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 04:39:25.980074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 04:39:25.980681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 04:39:25.984231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 04:39:25.994065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 04:39:26.002680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 04:39:26.003819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 04:39:26.010093 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 04:39:26.017088 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 04:39:26.023104 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 04:39:26.031988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1307) Jan 30 04:39:26.033112 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 04:39:26.037010 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 04:39:26.037255 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 04:39:26.037436 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 04:39:26.049683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.054155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 04:39:26.054635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 04:39:26.056333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 04:39:26.057045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 04:39:26.062216 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 04:39:26.062709 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 30 04:39:26.062747 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 30 04:39:26.063316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 04:39:26.066252 kernel: Console: switching to colour dummy device 80x25 Jan 30 04:39:26.070981 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 04:39:26.071021 kernel: [drm] features: -context_init Jan 30 04:39:26.077019 kernel: [drm] number of scanouts: 1 Jan 30 04:39:26.080329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.080547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 04:39:26.086676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 04:39:26.090095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 04:39:26.093111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
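With systemd-networkd and systemd-resolved starting in the real root, link and DNS state become inspectable (illustrative):

    networkctl list       # link state as seen by networkd
    resolvectl status     # per-link DNS configuration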
Jan 30 04:39:26.093254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 04:39:26.095991 kernel: [drm] number of cap sets: 0 Jan 30 04:39:26.096172 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 04:39:26.096249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.100013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 04:39:26.112040 kernel: EDAC MC: Ver: 3.0.0 Jan 30 04:39:26.122813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 04:39:26.126845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.128547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 04:39:26.128768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 04:39:26.130990 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 04:39:26.138044 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 04:39:26.138142 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 04:39:26.138225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.141107 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 04:39:26.141606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 04:39:26.141761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 04:39:26.152402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 04:39:26.152780 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 04:39:26.153942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 04:39:26.154780 augenrules[1414]: No rules Jan 30 04:39:26.157549 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 04:39:26.157868 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 04:39:26.158362 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 04:39:26.158525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 04:39:26.167103 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 04:39:26.173975 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 04:39:26.179278 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.187174 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 04:39:26.215008 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 04:39:26.216259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 04:39:26.220124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 04:39:26.230215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 04:39:26.235543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 04:39:26.244668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 04:39:26.246709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 04:39:26.246811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 04:39:26.246844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 04:39:26.247213 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 04:39:26.249024 systemd[1]: Finished ensure-sysext.service. Jan 30 04:39:26.253902 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 04:39:26.257762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 04:39:26.257943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 04:39:26.279235 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 04:39:26.284275 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 04:39:26.284445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 04:39:26.291186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 04:39:26.295672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 04:39:26.298000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 04:39:26.302845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 04:39:26.311467 augenrules[1423]: /sbin/augenrules: No change Jan 30 04:39:26.313231 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 04:39:26.313396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 04:39:26.317777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 04:39:26.327307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 04:39:26.327494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 04:39:26.337758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 04:39:26.338033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:26.351639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 04:39:26.371984 augenrules[1464]: No rules Jan 30 04:39:26.373122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 04:39:26.374014 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:26.383186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 04:39:26.383523 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 30 04:39:26.383704 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 04:39:26.396842 systemd-networkd[1383]: lo: Link UP Jan 30 04:39:26.397614 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 04:39:26.397711 systemd-networkd[1383]: lo: Gained carrier Jan 30 04:39:26.400858 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 04:39:26.402777 systemd-networkd[1383]: Enumeration completed Jan 30 04:39:26.403070 systemd-timesyncd[1444]: No network connectivity, watching for changes. Jan 30 04:39:26.403585 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 04:39:26.407252 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 04:39:26.409092 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:26.409176 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 04:39:26.411043 systemd-networkd[1383]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:26.411051 systemd-networkd[1383]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 04:39:26.414966 systemd-networkd[1383]: eth0: Link UP Jan 30 04:39:26.417993 systemd-networkd[1383]: eth0: Gained carrier Jan 30 04:39:26.418088 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:26.420518 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 04:39:26.429551 systemd-resolved[1384]: Positive Trust Anchors: Jan 30 04:39:26.429572 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 04:39:26.429598 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 04:39:26.431277 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 04:39:26.431291 systemd-networkd[1383]: eth1: Link UP Jan 30 04:39:26.431296 systemd-networkd[1383]: eth1: Gained carrier Jan 30 04:39:26.431319 systemd-networkd[1383]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 04:39:26.435012 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 04:39:26.442704 systemd-resolved[1384]: Using system hostname 'ci-4186-1-0-8-df2fd9e83c'. Jan 30 04:39:26.446404 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 04:39:26.448590 systemd[1]: Reached target network.target - Network. Jan 30 04:39:26.449140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 04:39:26.462912 lvm[1479]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 30 04:39:26.464096 systemd-networkd[1383]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 04:39:26.467640 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Jan 30 04:39:26.482030 systemd-networkd[1383]: eth0: DHCPv4 address 116.202.14.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 04:39:26.483476 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Jan 30 04:39:26.488002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 04:39:26.489539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 04:39:26.498262 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 04:39:26.501023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 04:39:26.502767 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 04:39:26.503612 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 04:39:26.504509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 04:39:26.505679 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 04:39:26.506496 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 04:39:26.507340 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 04:39:26.508141 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 04:39:26.508171 systemd[1]: Reached target paths.target - Path Units. Jan 30 04:39:26.508672 systemd[1]: Reached target timers.target - Timer Units. Jan 30 04:39:26.510432 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 04:39:26.513253 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 04:39:26.517841 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 04:39:26.522840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 04:39:26.524308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 04:39:26.526666 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 04:39:26.528435 systemd[1]: Reached target basic.target - Basic System. Jan 30 04:39:26.529059 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 04:39:26.529087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 04:39:26.534824 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 04:39:26.538219 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 04:39:26.546000 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 04:39:26.551084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 04:39:26.555133 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 04:39:26.559356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
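systemd-networkd notes above that eth0 and eth1 only matched the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name" before picking up their DHCPv4 leases. A sketch of pinning one uplink explicitly by MAC address instead of by name; the MAC below is a placeholder, not taken from this host:

    # Match on hardware address so a rename cannot change which profile applies.
    cat <<'EOF' > /etc/systemd/network/10-uplink.network
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload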
Jan 30 04:39:26.563810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 04:39:26.575095 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 04:39:26.581420 extend-filesystems[1493]: Found loop4 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found loop5 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found loop6 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found loop7 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda1 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda2 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda3 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found usr Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda4 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda6 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda7 Jan 30 04:39:26.581420 extend-filesystems[1493]: Found sda9 Jan 30 04:39:26.581420 extend-filesystems[1493]: Checking size of /dev/sda9 Jan 30 04:39:26.719111 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 04:39:26.719168 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1308) Jan 30 04:39:26.702715 dbus-daemon[1489]: [system] SELinux support is enabled Jan 30 04:39:26.588161 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 04:39:26.719653 coreos-metadata[1488]: Jan 30 04:39:26.599 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 04:39:26.719653 coreos-metadata[1488]: Jan 30 04:39:26.601 INFO Fetch successful Jan 30 04:39:26.719653 coreos-metadata[1488]: Jan 30 04:39:26.602 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 04:39:26.719653 coreos-metadata[1488]: Jan 30 04:39:26.602 INFO Fetch successful Jan 30 04:39:26.719832 jq[1492]: false Jan 30 04:39:26.719914 extend-filesystems[1493]: Resized partition /dev/sda9 Jan 30 04:39:26.592913 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 04:39:26.726625 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Jan 30 04:39:26.611363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 04:39:26.632419 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 04:39:26.634807 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 04:39:26.641359 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 04:39:26.644112 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 04:39:26.678066 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 04:39:26.683565 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 04:39:26.693761 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 04:39:26.694032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 04:39:26.694366 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 04:39:26.694543 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 04:39:26.714784 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
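The two coreos-metadata fetches logged above can be reproduced by hand when debugging provisioning; the link-local endpoints are exactly the ones the agent reports:

    # Run from the instance itself; the metadata service is only reachable link-locally.
    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks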
Jan 30 04:39:26.728169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 04:39:26.728366 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 04:39:26.767815 jq[1518]: true Jan 30 04:39:26.753494 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 04:39:26.773376 update_engine[1514]: I20250130 04:39:26.752863 1514 main.cc:92] Flatcar Update Engine starting Jan 30 04:39:26.773376 update_engine[1514]: I20250130 04:39:26.768249 1514 update_check_scheduler.cc:74] Next update check in 2m42s Jan 30 04:39:26.773173 systemd-logind[1509]: New seat seat0. Jan 30 04:39:26.831301 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 04:39:26.831326 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 04:39:26.840134 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 04:39:26.843979 jq[1531]: true Jan 30 04:39:26.866782 tar[1522]: linux-amd64/helm Jan 30 04:39:26.863271 systemd[1]: Started update-engine.service - Update Engine. Jan 30 04:39:26.869785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 04:39:26.869990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 04:39:26.870543 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 04:39:26.870642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 04:39:26.883677 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 04:39:26.954006 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 04:39:26.961196 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 04:39:26.963650 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 04:39:26.974885 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 04:39:26.976024 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 04:39:26.976024 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 04:39:26.976024 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 04:39:26.977890 extend-filesystems[1493]: Resized filesystem in /dev/sda9 Jan 30 04:39:26.977890 extend-filesystems[1493]: Found sr0 Jan 30 04:39:26.979542 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 04:39:26.979739 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 04:39:27.022397 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Jan 30 04:39:27.025125 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 04:39:27.038420 systemd[1]: Starting sshkeys.service... Jan 30 04:39:27.051532 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 04:39:27.064340 systemd[1]: Starting issuegen.service - Generate /run/issue... 
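extend-filesystems above grows the root ext4 on-line from 1617920 to 9393147 blocks while it stays mounted on /. Roughly the same operation done by hand, assuming the underlying partition has already been enlarged (device names are the ones this host logs; they will differ elsewhere):

    lsblk /dev/sda          # confirm sda9 spans the enlarged partition
    resize2fs /dev/sda9     # ext4 grows on-line while mounted
    df -h /                 # verify the new size took effect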
Jan 30 04:39:27.082501 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 04:39:27.094628 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 04:39:27.099618 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 04:39:27.103025 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 04:39:27.108320 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 04:39:27.112756 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 04:39:27.145125 coreos-metadata[1582]: Jan 30 04:39:27.144 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 04:39:27.146205 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 04:39:27.153873 coreos-metadata[1582]: Jan 30 04:39:27.153 INFO Fetch successful Jan 30 04:39:27.156691 unknown[1582]: wrote ssh authorized keys file for user: core Jan 30 04:39:27.159328 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 04:39:27.167494 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 04:39:27.170699 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 04:39:27.186083 containerd[1524]: time="2025-01-30T04:39:27.186008397Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 04:39:27.201648 update-ssh-keys[1593]: Updated "/home/core/.ssh/authorized_keys" Jan 30 04:39:27.203406 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 04:39:27.208455 containerd[1524]: time="2025-01-30T04:39:27.208237187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210048 containerd[1524]: time="2025-01-30T04:39:27.210023197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210120 containerd[1524]: time="2025-01-30T04:39:27.210106624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210162779Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210333429Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210348607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210414882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210425732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210590581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210603215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210615318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210623172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.210976 containerd[1524]: time="2025-01-30T04:39:27.210705767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.211226 containerd[1524]: time="2025-01-30T04:39:27.211207729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 04:39:27.211378 containerd[1524]: time="2025-01-30T04:39:27.211360786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 04:39:27.211430 containerd[1524]: time="2025-01-30T04:39:27.211417792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 04:39:27.211496 systemd[1]: Finished sshkeys.service. Jan 30 04:39:27.211665 containerd[1524]: time="2025-01-30T04:39:27.211648816Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 04:39:27.211814 containerd[1524]: time="2025-01-30T04:39:27.211782577Z" level=info msg="metadata content store policy set" policy=shared Jan 30 04:39:27.217351 containerd[1524]: time="2025-01-30T04:39:27.217325062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 04:39:27.217456 containerd[1524]: time="2025-01-30T04:39:27.217437663Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 04:39:27.218098 containerd[1524]: time="2025-01-30T04:39:27.218055361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 04:39:27.218098 containerd[1524]: time="2025-01-30T04:39:27.218099294Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 04:39:27.218181 containerd[1524]: time="2025-01-30T04:39:27.218118660Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 04:39:27.218300 containerd[1524]: time="2025-01-30T04:39:27.218274463Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 04:39:27.218572 containerd[1524]: time="2025-01-30T04:39:27.218545350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218657360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218677539Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218690443Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218702214Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218713166Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218723274Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218734175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218747600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218757959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218772256Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218782806Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218816339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218829223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.218862 containerd[1524]: time="2025-01-30T04:39:27.218840564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218851885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218862856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218875379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218887111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218898072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218908011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218920604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218930293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218939560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218965729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.218981950Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.219008900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.219030460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.219168 containerd[1524]: time="2025-01-30T04:39:27.219046620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219650724Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219673727Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219683876Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219768735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219780076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219790706Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219814431Z" level=info msg="NRI interface is disabled by configuration." Jan 30 04:39:27.220118 containerd[1524]: time="2025-01-30T04:39:27.219823908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 04:39:27.220245 containerd[1524]: time="2025-01-30T04:39:27.220119443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 04:39:27.220245 containerd[1524]: time="2025-01-30T04:39:27.220163876Z" level=info msg="Connect containerd service" Jan 30 04:39:27.220245 containerd[1524]: time="2025-01-30T04:39:27.220192550Z" level=info msg="using legacy CRI server" Jan 30 04:39:27.220245 containerd[1524]: time="2025-01-30T04:39:27.220198792Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 04:39:27.220404 containerd[1524]: time="2025-01-30T04:39:27.220291365Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 04:39:27.220982 containerd[1524]: time="2025-01-30T04:39:27.220890058Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 04:39:27.221330 
containerd[1524]: time="2025-01-30T04:39:27.221068663Z" level=info msg="Start subscribing containerd event" Jan 30 04:39:27.221330 containerd[1524]: time="2025-01-30T04:39:27.221121652Z" level=info msg="Start recovering state" Jan 30 04:39:27.221330 containerd[1524]: time="2025-01-30T04:39:27.221173760Z" level=info msg="Start event monitor" Jan 30 04:39:27.221330 containerd[1524]: time="2025-01-30T04:39:27.221192645Z" level=info msg="Start snapshots syncer" Jan 30 04:39:27.221330 containerd[1524]: time="2025-01-30T04:39:27.221200551Z" level=info msg="Start cni network conf syncer for default" Jan 30 04:39:27.221330 containerd[1524]: time="2025-01-30T04:39:27.221207413Z" level=info msg="Start streaming server" Jan 30 04:39:27.221589 containerd[1524]: time="2025-01-30T04:39:27.221563481Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 04:39:27.221658 containerd[1524]: time="2025-01-30T04:39:27.221633492Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 04:39:27.221833 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 04:39:27.227824 containerd[1524]: time="2025-01-30T04:39:27.227744995Z" level=info msg="containerd successfully booted in 0.042566s" Jan 30 04:39:27.453356 tar[1522]: linux-amd64/LICENSE Jan 30 04:39:27.453462 tar[1522]: linux-amd64/README.md Jan 30 04:39:27.457106 systemd-networkd[1383]: eth1: Gained IPv6LL Jan 30 04:39:27.457876 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Jan 30 04:39:27.462317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 04:39:27.467678 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 04:39:27.472642 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 04:39:27.483328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:39:27.487595 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 04:39:27.527407 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 04:39:28.161099 systemd-networkd[1383]: eth0: Gained IPv6LL Jan 30 04:39:28.161583 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Jan 30 04:39:28.244742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:39:28.245746 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 04:39:28.249116 systemd[1]: Startup finished in 1.215s (kernel) + 7.269s (initrd) + 4.329s (userspace) = 12.815s. Jan 30 04:39:28.255937 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:39:28.265338 agetty[1591]: failed to open credentials directory Jan 30 04:39:28.266667 agetty[1592]: failed to open credentials directory Jan 30 04:39:28.763297 kubelet[1620]: E0130 04:39:28.763229 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:39:28.766642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:39:28.766843 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
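containerd came up above with "failed to load cni during init ... no network config found in /etc/cni/net.d", which is normal before a pod network add-on is installed. For reference, a minimal bridge conflist of the kind such an add-on would drop into /etc/cni/net.d; the name, subnet and plugin choice are illustrative and assume the standard CNI plugins exist under /opt/cni/bin (the bin dir shown in the config dump above):

    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF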
Jan 30 04:39:32.214491 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 04:39:32.220571 systemd[1]: Started sshd@0-116.202.14.223:22-92.255.85.189:33094.service - OpenSSH per-connection server daemon (92.255.85.189:33094). Jan 30 04:39:32.877382 sshd[1633]: Connection closed by authenticating user root 92.255.85.189 port 33094 [preauth] Jan 30 04:39:32.880260 systemd[1]: sshd@0-116.202.14.223:22-92.255.85.189:33094.service: Deactivated successfully. Jan 30 04:39:38.919638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 04:39:38.925509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:39:39.082272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:39:39.087354 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:39:39.131680 kubelet[1645]: E0130 04:39:39.131610 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:39:39.137274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:39:39.137467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:39:49.169485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 04:39:49.175380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:39:49.320484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:39:49.335349 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:39:49.390194 kubelet[1661]: E0130 04:39:49.390101 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:39:49.394244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:39:49.394490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:39:58.978531 systemd-resolved[1384]: Clock change detected. Flushing caches. Jan 30 04:39:58.978677 systemd-timesyncd[1444]: Contacted time server 178.63.67.56:123 (2.flatcar.pool.ntp.org). Jan 30 04:39:58.978728 systemd-timesyncd[1444]: Initial clock synchronization to Thu 2025-01-30 04:39:58.978487 UTC. Jan 30 04:40:00.010499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 04:40:00.017059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:40:00.164919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
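The kubelet failure above, and the identical ones that repeat below each time systemd restarts the unit, all come from the same missing /var/lib/kubelet/config.yaml; on this kind of image that file normally appears only once the node is bootstrapped into a cluster (typically by kubeadm), so the restart loop is expected until then. A quick way to confirm what the unit is stuck on:

    systemctl status kubelet --no-pager
    ls -l /var/lib/kubelet/config.yaml
    journalctl -u kubelet -n 20 --no-pager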
Jan 30 04:40:00.169317 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:00.202562 kubelet[1678]: E0130 04:40:00.202502 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:00.206511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:00.206698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:40:10.260489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 04:40:10.273143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:40:10.406512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:40:10.410652 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:10.443406 kubelet[1694]: E0130 04:40:10.443316 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:10.446503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:10.446688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:40:13.070745 update_engine[1514]: I20250130 04:40:13.070621 1514 update_attempter.cc:509] Updating boot flags... Jan 30 04:40:13.117031 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1711) Jan 30 04:40:13.186929 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1712) Jan 30 04:40:13.239710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1712) Jan 30 04:40:20.510436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 04:40:20.523367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:40:20.659066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:40:20.659257 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:20.701096 kubelet[1731]: E0130 04:40:20.701026 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:20.705165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:20.705378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:40:30.760492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 04:40:30.770126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
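update_engine logs "Updating boot flags..." above after scheduling its next check, and locksmithd (started earlier with strategy "reboot") is what would coordinate any reboot an applied update needs. A sketch of how that strategy is commonly changed on Flatcar; the path and key come from general Flatcar convention rather than this log, so treat them as an assumption:

    # Assumed Flatcar convention: REBOOT_STRATEGY in /etc/flatcar/update.conf, read by locksmithd.
    cat <<'EOF' > /etc/flatcar/update.conf
    REBOOT_STRATEGY=off
    EOF
    systemctl restart locksmithd.service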
Jan 30 04:40:30.918626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:40:30.923495 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:30.959502 kubelet[1748]: E0130 04:40:30.959450 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:30.962262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:30.962437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:40:41.010388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 04:40:41.016059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:40:41.137373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:40:41.141788 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:41.182521 kubelet[1764]: E0130 04:40:41.182470 1764 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:41.186143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:41.186337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:40:51.260311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 04:40:51.266064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:40:51.409684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:40:51.414156 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:40:51.448876 kubelet[1780]: E0130 04:40:51.448818 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:40:51.452262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:40:51.452445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:41:01.510348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 04:41:01.520123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:01.681453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 04:41:01.696258 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:41:01.734288 kubelet[1797]: E0130 04:41:01.734226 1797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:41:01.738081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:41:01.738320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:41:11.760352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 04:41:11.765094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:11.916079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:11.920523 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:41:11.955841 kubelet[1813]: E0130 04:41:11.955797 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:41:11.959499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:41:11.959711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:41:22.010661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 04:41:22.016402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:22.155495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:22.159420 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:41:22.195831 kubelet[1829]: E0130 04:41:22.195777 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:41:22.199317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:41:22.199498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:41:26.454158 systemd[1]: Started sshd@1-116.202.14.223:22-139.178.89.65:53206.service - OpenSSH per-connection server daemon (139.178.89.65:53206). Jan 30 04:41:27.423819 sshd[1838]: Accepted publickey for core from 139.178.89.65 port 53206 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:27.425586 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:27.433857 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 04:41:27.439117 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 04:41:27.441220 systemd-logind[1509]: New session 1 of user core. 
Jan 30 04:41:27.470641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 04:41:27.479237 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 04:41:27.483634 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 04:41:27.593880 systemd[1842]: Queued start job for default target default.target. Jan 30 04:41:27.603460 systemd[1842]: Created slice app.slice - User Application Slice. Jan 30 04:41:27.603497 systemd[1842]: Reached target paths.target - Paths. Jan 30 04:41:27.603512 systemd[1842]: Reached target timers.target - Timers. Jan 30 04:41:27.605113 systemd[1842]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 04:41:27.617008 systemd[1842]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 04:41:27.617206 systemd[1842]: Reached target sockets.target - Sockets. Jan 30 04:41:27.617224 systemd[1842]: Reached target basic.target - Basic System. Jan 30 04:41:27.617279 systemd[1842]: Reached target default.target - Main User Target. Jan 30 04:41:27.617315 systemd[1842]: Startup finished in 126ms. Jan 30 04:41:27.617742 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 04:41:27.628087 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 04:41:28.315236 systemd[1]: Started sshd@2-116.202.14.223:22-139.178.89.65:53216.service - OpenSSH per-connection server daemon (139.178.89.65:53216). Jan 30 04:41:29.300197 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 53216 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:29.301799 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:29.306977 systemd-logind[1509]: New session 2 of user core. Jan 30 04:41:29.314046 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 04:41:29.980394 sshd[1855]: Connection closed by 139.178.89.65 port 53216 Jan 30 04:41:29.981080 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Jan 30 04:41:29.984282 systemd[1]: sshd@2-116.202.14.223:22-139.178.89.65:53216.service: Deactivated successfully. Jan 30 04:41:29.986744 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 04:41:29.988293 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Jan 30 04:41:29.989475 systemd-logind[1509]: Removed session 2. Jan 30 04:41:30.152187 systemd[1]: Started sshd@3-116.202.14.223:22-139.178.89.65:53222.service - OpenSSH per-connection server daemon (139.178.89.65:53222). Jan 30 04:41:31.118639 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 53222 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:31.120434 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:31.125752 systemd-logind[1509]: New session 3 of user core. Jan 30 04:41:31.138171 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 04:41:31.789582 sshd[1862]: Connection closed by 139.178.89.65 port 53222 Jan 30 04:41:31.790232 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Jan 30 04:41:31.793810 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Jan 30 04:41:31.794644 systemd[1]: sshd@3-116.202.14.223:22-139.178.89.65:53222.service: Deactivated successfully. Jan 30 04:41:31.796478 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 30 04:41:31.797382 systemd-logind[1509]: Removed session 3. Jan 30 04:41:31.962175 systemd[1]: Started sshd@4-116.202.14.223:22-139.178.89.65:49436.service - OpenSSH per-connection server daemon (139.178.89.65:49436). Jan 30 04:41:32.260500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 04:41:32.267140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:32.415815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:32.420414 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:41:32.456571 kubelet[1877]: E0130 04:41:32.456519 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:41:32.460730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:41:32.460932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 04:41:32.953987 sshd[1867]: Accepted publickey for core from 139.178.89.65 port 49436 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:32.955761 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:32.961447 systemd-logind[1509]: New session 4 of user core. Jan 30 04:41:32.971495 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 04:41:33.636593 sshd[1885]: Connection closed by 139.178.89.65 port 49436 Jan 30 04:41:33.637334 sshd-session[1867]: pam_unix(sshd:session): session closed for user core Jan 30 04:41:33.641358 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Jan 30 04:41:33.642308 systemd[1]: sshd@4-116.202.14.223:22-139.178.89.65:49436.service: Deactivated successfully. Jan 30 04:41:33.644420 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 04:41:33.645821 systemd-logind[1509]: Removed session 4. Jan 30 04:41:33.812347 systemd[1]: Started sshd@5-116.202.14.223:22-139.178.89.65:49448.service - OpenSSH per-connection server daemon (139.178.89.65:49448). Jan 30 04:41:34.788036 sshd[1890]: Accepted publickey for core from 139.178.89.65 port 49448 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:34.789660 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:34.794712 systemd-logind[1509]: New session 5 of user core. Jan 30 04:41:34.801107 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 04:41:35.316666 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 04:41:35.317095 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 04:41:35.327340 sudo[1893]: pam_unix(sudo:session): session closed for user root Jan 30 04:41:35.486066 sshd[1892]: Connection closed by 139.178.89.65 port 49448 Jan 30 04:41:35.486793 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Jan 30 04:41:35.489703 systemd[1]: sshd@5-116.202.14.223:22-139.178.89.65:49448.service: Deactivated successfully. Jan 30 04:41:35.491546 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 30 04:41:35.492937 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Jan 30 04:41:35.494144 systemd-logind[1509]: Removed session 5. Jan 30 04:41:35.656143 systemd[1]: Started sshd@6-116.202.14.223:22-139.178.89.65:49452.service - OpenSSH per-connection server daemon (139.178.89.65:49452). Jan 30 04:41:36.647075 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 49452 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:36.648742 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:36.653660 systemd-logind[1509]: New session 6 of user core. Jan 30 04:41:36.665071 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 04:41:37.168722 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 04:41:37.169147 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 04:41:37.173283 sudo[1902]: pam_unix(sudo:session): session closed for user root Jan 30 04:41:37.180038 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 04:41:37.180476 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 04:41:37.202205 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 04:41:37.233326 augenrules[1924]: No rules Jan 30 04:41:37.234997 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 04:41:37.235382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 04:41:37.236775 sudo[1901]: pam_unix(sudo:session): session closed for user root Jan 30 04:41:37.395488 sshd[1900]: Connection closed by 139.178.89.65 port 49452 Jan 30 04:41:37.396182 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Jan 30 04:41:37.400636 systemd[1]: sshd@6-116.202.14.223:22-139.178.89.65:49452.service: Deactivated successfully. Jan 30 04:41:37.402791 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 04:41:37.403763 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Jan 30 04:41:37.404863 systemd-logind[1509]: Removed session 6. Jan 30 04:41:37.567161 systemd[1]: Started sshd@7-116.202.14.223:22-139.178.89.65:49456.service - OpenSSH per-connection server daemon (139.178.89.65:49456). Jan 30 04:41:38.538462 sshd[1932]: Accepted publickey for core from 139.178.89.65 port 49456 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:41:38.540156 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:41:38.545641 systemd-logind[1509]: New session 7 of user core. Jan 30 04:41:38.552050 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 04:41:39.057920 sudo[1935]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 04:41:39.058347 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 04:41:39.317126 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 04:41:39.317380 (dockerd)[1953]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 04:41:39.570348 dockerd[1953]: time="2025-01-30T04:41:39.569928100Z" level=info msg="Starting up" Jan 30 04:41:39.663172 dockerd[1953]: time="2025-01-30T04:41:39.663121842Z" level=info msg="Loading containers: start." Jan 30 04:41:39.819922 kernel: Initializing XFRM netlink socket Jan 30 04:41:39.905405 systemd-networkd[1383]: docker0: Link UP Jan 30 04:41:39.929174 dockerd[1953]: time="2025-01-30T04:41:39.929144688Z" level=info msg="Loading containers: done." Jan 30 04:41:39.942133 dockerd[1953]: time="2025-01-30T04:41:39.942083459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 04:41:39.942258 dockerd[1953]: time="2025-01-30T04:41:39.942186632Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 04:41:39.942322 dockerd[1953]: time="2025-01-30T04:41:39.942295355Z" level=info msg="Daemon has completed initialization" Jan 30 04:41:39.971166 dockerd[1953]: time="2025-01-30T04:41:39.971103342Z" level=info msg="API listen on /run/docker.sock" Jan 30 04:41:39.971447 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 04:41:41.038195 containerd[1524]: time="2025-01-30T04:41:41.038046064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 04:41:41.603937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969983388.mount: Deactivated successfully. Jan 30 04:41:42.510329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 04:41:42.518351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
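A few entries up, dockerd reports "API listen on /run/docker.sock", so the Engine API is reachable over that unix socket from this point on. A small standard-library Go sketch that pings it; GET /_ping is a documented Engine API endpoint, and the host name in the URL is a placeholder because the custom dialer ignores it:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route every request from this client through the daemon's unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	// GET /_ping answers "OK" once the daemon has completed initialization.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body)
}
```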
Jan 30 04:41:42.655720 containerd[1524]: time="2025-01-30T04:41:42.655198987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:42.658423 containerd[1524]: time="2025-01-30T04:41:42.657957494Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104" Jan 30 04:41:42.660531 containerd[1524]: time="2025-01-30T04:41:42.659073206Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:42.663795 containerd[1524]: time="2025-01-30T04:41:42.663771853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:42.664657 containerd[1524]: time="2025-01-30T04:41:42.664622941Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.626523088s" Jan 30 04:41:42.664710 containerd[1524]: time="2025-01-30T04:41:42.664674818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 04:41:42.672106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:42.682495 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 04:41:42.692839 containerd[1524]: time="2025-01-30T04:41:42.692803572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 04:41:42.723082 kubelet[2212]: E0130 04:41:42.722997 2212 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 04:41:42.725950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 04:41:42.726133 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
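The pull records carry enough data to estimate registry throughput: the kube-apiserver image above is 32,673,812 bytes fetched in 1.626523088 s, roughly 19 MiB/s. A throwaway Go calculation using the figures exactly as logged:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "Pulled image" entry for kube-apiserver:v1.30.9.
	const sizeBytes = 32673812
	d, err := time.ParseDuration("1.626523088s")
	if err != nil {
		panic(err)
	}
	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %v -> %.1f MiB/s\n", mib, d, mib/d.Seconds())
}
```

The same arithmetic applies to the later pulls (controller-manager, scheduler, kube-proxy, coredns, pause, etcd); only the size and duration change.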
Jan 30 04:41:43.979148 containerd[1524]: time="2025-01-30T04:41:43.979074814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:43.980359 containerd[1524]: time="2025-01-30T04:41:43.980312333Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765" Jan 30 04:41:43.981228 containerd[1524]: time="2025-01-30T04:41:43.981184140Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:43.984045 containerd[1524]: time="2025-01-30T04:41:43.983985358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:43.985161 containerd[1524]: time="2025-01-30T04:41:43.984978902Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.292135764s" Jan 30 04:41:43.985161 containerd[1524]: time="2025-01-30T04:41:43.985012494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 04:41:44.007335 containerd[1524]: time="2025-01-30T04:41:44.007286593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 04:41:44.942591 containerd[1524]: time="2025-01-30T04:41:44.942545280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:44.943450 containerd[1524]: time="2025-01-30T04:41:44.943381090Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084" Jan 30 04:41:44.944019 containerd[1524]: time="2025-01-30T04:41:44.943971773Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:44.946336 containerd[1524]: time="2025-01-30T04:41:44.946299177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:44.947404 containerd[1524]: time="2025-01-30T04:41:44.947243820Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 939.916472ms" Jan 30 04:41:44.947404 containerd[1524]: time="2025-01-30T04:41:44.947289385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 04:41:44.968527 
containerd[1524]: time="2025-01-30T04:41:44.968495885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 04:41:45.941472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106421451.mount: Deactivated successfully. Jan 30 04:41:46.251259 containerd[1524]: time="2025-01-30T04:41:46.251131111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:46.252157 containerd[1524]: time="2025-01-30T04:41:46.252131990Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363" Jan 30 04:41:46.252949 containerd[1524]: time="2025-01-30T04:41:46.252911084Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:46.254583 containerd[1524]: time="2025-01-30T04:41:46.254543631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:46.255410 containerd[1524]: time="2025-01-30T04:41:46.255020361Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.286325645s" Jan 30 04:41:46.255410 containerd[1524]: time="2025-01-30T04:41:46.255059685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 04:41:46.273834 containerd[1524]: time="2025-01-30T04:41:46.273798298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 04:41:46.785396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097797041.mount: Deactivated successfully. 
Jan 30 04:41:47.381445 containerd[1524]: time="2025-01-30T04:41:47.381362878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.382675 containerd[1524]: time="2025-01-30T04:41:47.382619656Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 30 04:41:47.383655 containerd[1524]: time="2025-01-30T04:41:47.383584947Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.386283 containerd[1524]: time="2025-01-30T04:41:47.386261825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.387404 containerd[1524]: time="2025-01-30T04:41:47.387267854Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.113284862s" Jan 30 04:41:47.387404 containerd[1524]: time="2025-01-30T04:41:47.387301767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 04:41:47.407638 containerd[1524]: time="2025-01-30T04:41:47.407583746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 04:41:47.900319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782459812.mount: Deactivated successfully. 
Jan 30 04:41:47.903966 containerd[1524]: time="2025-01-30T04:41:47.903935010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.904820 containerd[1524]: time="2025-01-30T04:41:47.904772414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Jan 30 04:41:47.905596 containerd[1524]: time="2025-01-30T04:41:47.905527865Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.907641 containerd[1524]: time="2025-01-30T04:41:47.907579224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:47.908607 containerd[1524]: time="2025-01-30T04:41:47.908253514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 500.634231ms" Jan 30 04:41:47.908607 containerd[1524]: time="2025-01-30T04:41:47.908280794Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 04:41:47.932309 containerd[1524]: time="2025-01-30T04:41:47.932228898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 04:41:48.425444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343308386.mount: Deactivated successfully. 
Jan 30 04:41:49.800627 containerd[1524]: time="2025-01-30T04:41:49.800571848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:49.802152 containerd[1524]: time="2025-01-30T04:41:49.802091546Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Jan 30 04:41:49.802538 containerd[1524]: time="2025-01-30T04:41:49.802497584Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:49.805241 containerd[1524]: time="2025-01-30T04:41:49.805194060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:41:49.806140 containerd[1524]: time="2025-01-30T04:41:49.806113387Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.873846088s" Jan 30 04:41:49.806404 containerd[1524]: time="2025-01-30T04:41:49.806220817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 04:41:52.397909 systemd[1]: Started sshd@8-116.202.14.223:22-106.12.12.183:40812.service - OpenSSH per-connection server daemon (106.12.12.183:40812). Jan 30 04:41:52.662826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:52.669152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:52.693983 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-7.scope)... Jan 30 04:41:52.694150 systemd[1]: Reloading... Jan 30 04:41:52.816913 zram_generator::config[2468]: No configuration found. Jan 30 04:41:52.920335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 04:41:52.991483 systemd[1]: Reloading finished in 296 ms. Jan 30 04:41:53.038570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:53.043807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:53.044757 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 04:41:53.045048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:53.050178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:53.172399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:53.180273 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 04:41:53.216656 kubelet[2524]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 04:41:53.216656 kubelet[2524]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 04:41:53.216656 kubelet[2524]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 04:41:53.218759 kubelet[2524]: I0130 04:41:53.218722 2524 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 04:41:53.371423 kubelet[2524]: I0130 04:41:53.371380 2524 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 04:41:53.371423 kubelet[2524]: I0130 04:41:53.371405 2524 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 04:41:53.371636 kubelet[2524]: I0130 04:41:53.371613 2524 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 04:41:53.390177 kubelet[2524]: I0130 04:41:53.390124 2524 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 04:41:53.392803 kubelet[2524]: E0130 04:41:53.392698 2524 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://116.202.14.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.407203 kubelet[2524]: I0130 04:41:53.407170 2524 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 04:41:53.407403 kubelet[2524]: I0130 04:41:53.407361 2524 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 04:41:53.407540 kubelet[2524]: I0130 04:41:53.407387 2524 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-8-df2fd9e83c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 04:41:53.407540 kubelet[2524]: I0130 04:41:53.407538 2524 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 04:41:53.407657 kubelet[2524]: I0130 04:41:53.407547 2524 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 04:41:53.407691 kubelet[2524]: I0130 04:41:53.407667 2524 state_mem.go:36] "Initialized new in-memory state store" Jan 30 04:41:53.408427 kubelet[2524]: I0130 04:41:53.408414 2524 kubelet.go:400] "Attempting to sync node with API server" Jan 30 04:41:53.408466 kubelet[2524]: I0130 04:41:53.408431 2524 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 04:41:53.408466 kubelet[2524]: I0130 04:41:53.408450 2524 kubelet.go:312] "Adding apiserver pod source" Jan 30 04:41:53.408466 kubelet[2524]: I0130 04:41:53.408463 2524 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 04:41:53.413522 kubelet[2524]: W0130 04:41:53.413481 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.413637 kubelet[2524]: E0130 04:41:53.413623 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.414982 kubelet[2524]: W0130 04:41:53.414710 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-df2fd9e83c&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.414982 kubelet[2524]: E0130 04:41:53.414748 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-df2fd9e83c&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.414982 kubelet[2524]: I0130 04:41:53.414844 2524 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 04:41:53.416309 kubelet[2524]: I0130 04:41:53.416283 2524 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 04:41:53.416369 kubelet[2524]: W0130 04:41:53.416345 2524 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 04:41:53.416921 kubelet[2524]: I0130 04:41:53.416872 2524 server.go:1264] "Started kubelet" Jan 30 04:41:53.419061 kubelet[2524]: I0130 04:41:53.418932 2524 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 04:41:53.421818 kubelet[2524]: I0130 04:41:53.420870 2524 server.go:455] "Adding debug handlers to kubelet server" Jan 30 04:41:53.421818 kubelet[2524]: I0130 04:41:53.421386 2524 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 04:41:53.421818 kubelet[2524]: I0130 04:41:53.421619 2524 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 04:41:53.421818 kubelet[2524]: E0130 04:41:53.421725 2524 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.14.223:6443/api/v1/namespaces/default/events\": dial tcp 116.202.14.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-8-df2fd9e83c.181f5eb52c7e232f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-8-df2fd9e83c,UID:ci-4186-1-0-8-df2fd9e83c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-8-df2fd9e83c,},FirstTimestamp:2025-01-30 04:41:53.416856367 +0000 UTC m=+0.232453085,LastTimestamp:2025-01-30 04:41:53.416856367 +0000 UTC m=+0.232453085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-8-df2fd9e83c,}" Jan 30 04:41:53.424491 kubelet[2524]: I0130 04:41:53.424237 2524 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 04:41:53.429682 kubelet[2524]: E0130 04:41:53.429659 2524 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 04:41:53.430299 kubelet[2524]: E0130 04:41:53.430285 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:53.430395 kubelet[2524]: I0130 04:41:53.430384 2524 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 04:41:53.430537 kubelet[2524]: I0130 04:41:53.430525 2524 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 04:41:53.430621 kubelet[2524]: I0130 04:41:53.430611 2524 reconciler.go:26] "Reconciler: start to sync state" Jan 30 04:41:53.431027 kubelet[2524]: W0130 04:41:53.430978 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.431118 kubelet[2524]: E0130 04:41:53.431099 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.431843 kubelet[2524]: I0130 04:41:53.431826 2524 factory.go:221] Registration of the systemd container factory successfully Jan 30 04:41:53.432032 kubelet[2524]: I0130 04:41:53.432014 2524 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 04:41:53.432336 kubelet[2524]: E0130 04:41:53.432314 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-df2fd9e83c?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="200ms" Jan 30 04:41:53.433547 kubelet[2524]: I0130 04:41:53.433530 2524 factory.go:221] Registration of the containerd container factory successfully Jan 30 04:41:53.448108 kubelet[2524]: I0130 04:41:53.448068 2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 04:41:53.449391 kubelet[2524]: I0130 04:41:53.449371 2524 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 04:41:53.449474 kubelet[2524]: I0130 04:41:53.449464 2524 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 04:41:53.449536 kubelet[2524]: I0130 04:41:53.449526 2524 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 04:41:53.449620 kubelet[2524]: E0130 04:41:53.449604 2524 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 04:41:53.456553 kubelet[2524]: W0130 04:41:53.456517 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.456709 kubelet[2524]: E0130 04:41:53.456697 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:53.466319 kubelet[2524]: I0130 04:41:53.466286 2524 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 04:41:53.466319 kubelet[2524]: I0130 04:41:53.466304 2524 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 04:41:53.466319 kubelet[2524]: I0130 04:41:53.466320 2524 state_mem.go:36] "Initialized new in-memory state store" Jan 30 04:41:53.467699 kubelet[2524]: I0130 04:41:53.467679 2524 policy_none.go:49] "None policy: Start" Jan 30 04:41:53.468217 kubelet[2524]: I0130 04:41:53.468204 2524 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 04:41:53.468559 kubelet[2524]: I0130 04:41:53.468353 2524 state_mem.go:35] "Initializing new in-memory state store" Jan 30 04:41:53.475850 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 04:41:53.485078 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 04:41:53.488318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 04:41:53.498049 kubelet[2524]: I0130 04:41:53.498018 2524 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 04:41:53.498506 kubelet[2524]: I0130 04:41:53.498214 2524 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 04:41:53.498506 kubelet[2524]: I0130 04:41:53.498318 2524 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 04:41:53.499907 kubelet[2524]: E0130 04:41:53.499867 2524 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:53.532912 kubelet[2524]: I0130 04:41:53.532867 2524 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.533216 kubelet[2524]: E0130 04:41:53.533189 2524 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.550426 kubelet[2524]: I0130 04:41:53.550363 2524 topology_manager.go:215] "Topology Admit Handler" podUID="3a1af4af34c78f6900180b260047e815" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.552021 kubelet[2524]: I0130 04:41:53.551973 2524 topology_manager.go:215] "Topology Admit Handler" podUID="d17c49c45fac2ff9fd874398841248ae" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.553380 kubelet[2524]: I0130 04:41:53.553353 2524 topology_manager.go:215] "Topology Admit Handler" podUID="2cde3ade72b82935c6fc4cc1efd4a49a" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.560869 systemd[1]: Created slice kubepods-burstable-pod3a1af4af34c78f6900180b260047e815.slice - libcontainer container kubepods-burstable-pod3a1af4af34c78f6900180b260047e815.slice. Jan 30 04:41:53.586665 systemd[1]: Created slice kubepods-burstable-podd17c49c45fac2ff9fd874398841248ae.slice - libcontainer container kubepods-burstable-podd17c49c45fac2ff9fd874398841248ae.slice. Jan 30 04:41:53.591959 systemd[1]: Created slice kubepods-burstable-pod2cde3ade72b82935c6fc4cc1efd4a49a.slice - libcontainer container kubepods-burstable-pod2cde3ade72b82935c6fc4cc1efd4a49a.slice. 
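The kubepods-*.slice units here show the systemd cgroup driver at work: one slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice), then one child slice per pod named from the pod UID. A Go sketch of that naming convention as it appears in these entries; the dash-to-underscore escaping is an assumption for UIDs that contain dashes (the static-pod UIDs above are plain hex and need no escaping):

```go
package main

import (
	"fmt"
	"strings"
)

// Mirrors the slice names visible in the log. Escaping "-" to "_" is an
// assumption for dash-containing UIDs, not something the log demonstrates.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	for _, uid := range []string{
		"3a1af4af34c78f6900180b260047e815", // kube-scheduler static pod
		"d17c49c45fac2ff9fd874398841248ae", // kube-apiserver static pod
		"2cde3ade72b82935c6fc4cc1efd4a49a", // kube-controller-manager static pod
	} {
		fmt.Println(podSlice("burstable", uid))
	}
}
```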
Jan 30 04:41:53.632267 kubelet[2524]: I0130 04:41:53.632224 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632267 kubelet[2524]: I0130 04:41:53.632262 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632267 kubelet[2524]: I0130 04:41:53.632283 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a1af4af34c78f6900180b260047e815-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-8-df2fd9e83c\" (UID: \"3a1af4af34c78f6900180b260047e815\") " pod="kube-system/kube-scheduler-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632689 kubelet[2524]: I0130 04:41:53.632306 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632689 kubelet[2524]: I0130 04:41:53.632320 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632689 kubelet[2524]: I0130 04:41:53.632334 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632689 kubelet[2524]: I0130 04:41:53.632348 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632689 kubelet[2524]: I0130 04:41:53.632389 2524 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.632856 kubelet[2524]: I0130 04:41:53.632428 2524 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.633618 kubelet[2524]: E0130 04:41:53.633576 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-df2fd9e83c?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="400ms" Jan 30 04:41:53.728653 sshd[2418]: Connection closed by authenticating user root 106.12.12.183 port 40812 [preauth] Jan 30 04:41:53.731694 systemd[1]: sshd@8-116.202.14.223:22-106.12.12.183:40812.service: Deactivated successfully. Jan 30 04:41:53.735694 kubelet[2524]: I0130 04:41:53.735660 2524 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.736331 kubelet[2524]: E0130 04:41:53.736294 2524 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:53.883614 containerd[1524]: time="2025-01-30T04:41:53.883544323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-8-df2fd9e83c,Uid:3a1af4af34c78f6900180b260047e815,Namespace:kube-system,Attempt:0,}" Jan 30 04:41:53.890488 containerd[1524]: time="2025-01-30T04:41:53.890319962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-8-df2fd9e83c,Uid:d17c49c45fac2ff9fd874398841248ae,Namespace:kube-system,Attempt:0,}" Jan 30 04:41:53.895210 containerd[1524]: time="2025-01-30T04:41:53.895149766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-8-df2fd9e83c,Uid:2cde3ade72b82935c6fc4cc1efd4a49a,Namespace:kube-system,Attempt:0,}" Jan 30 04:41:54.034806 kubelet[2524]: E0130 04:41:54.034751 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-df2fd9e83c?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="800ms" Jan 30 04:41:54.138138 kubelet[2524]: I0130 04:41:54.138087 2524 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:54.138557 kubelet[2524]: E0130 04:41:54.138509 2524 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:54.377783 kubelet[2524]: W0130 04:41:54.377599 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-df2fd9e83c&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.377783 kubelet[2524]: E0130 04:41:54.377672 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-df2fd9e83c&limit=500&resourceVersion=0": dial tcp 
116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.377909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059152518.mount: Deactivated successfully. Jan 30 04:41:54.405294 containerd[1524]: time="2025-01-30T04:41:54.405228267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 04:41:54.408169 containerd[1524]: time="2025-01-30T04:41:54.408109520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 04:41:54.408916 containerd[1524]: time="2025-01-30T04:41:54.408839173Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 04:41:54.409779 containerd[1524]: time="2025-01-30T04:41:54.409725399Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 04:41:54.411124 containerd[1524]: time="2025-01-30T04:41:54.411091211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 04:41:54.411918 containerd[1524]: time="2025-01-30T04:41:54.411811017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 04:41:54.413229 containerd[1524]: time="2025-01-30T04:41:54.413084005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 04:41:54.413947 containerd[1524]: time="2025-01-30T04:41:54.413901262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 04:41:54.416274 containerd[1524]: time="2025-01-30T04:41:54.416238059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.825233ms" Jan 30 04:41:54.419573 containerd[1524]: time="2025-01-30T04:41:54.419444129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.8733ms" Jan 30 04:41:54.421343 containerd[1524]: time="2025-01-30T04:41:54.421303313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.067877ms" Jan 30 04:41:54.525745 kubelet[2524]: W0130 04:41:54.525654 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.525745 kubelet[2524]: E0130 04:41:54.525715 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.537677 containerd[1524]: time="2025-01-30T04:41:54.536173062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:41:54.537677 containerd[1524]: time="2025-01-30T04:41:54.536219789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:41:54.537677 containerd[1524]: time="2025-01-30T04:41:54.536228655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.537677 containerd[1524]: time="2025-01-30T04:41:54.536291783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.538197 containerd[1524]: time="2025-01-30T04:41:54.537978876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:41:54.538197 containerd[1524]: time="2025-01-30T04:41:54.538039659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:41:54.538197 containerd[1524]: time="2025-01-30T04:41:54.538050329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.538197 containerd[1524]: time="2025-01-30T04:41:54.538119268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.539205 containerd[1524]: time="2025-01-30T04:41:54.535846882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:41:54.539338 containerd[1524]: time="2025-01-30T04:41:54.539312828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:41:54.540774 containerd[1524]: time="2025-01-30T04:41:54.540643013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.540774 containerd[1524]: time="2025-01-30T04:41:54.540721540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:41:54.559913 systemd[1]: Started cri-containerd-adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d.scope - libcontainer container adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d. Jan 30 04:41:54.566083 systemd[1]: Started cri-containerd-e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6.scope - libcontainer container e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6. 
Jan 30 04:41:54.577593 systemd[1]: Started cri-containerd-37bc5655f846b0caa5d46a9ac0849f31eb44e05f04e37e38e5f12f373e00c3d1.scope - libcontainer container 37bc5655f846b0caa5d46a9ac0849f31eb44e05f04e37e38e5f12f373e00c3d1. Jan 30 04:41:54.624925 containerd[1524]: time="2025-01-30T04:41:54.624223226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-8-df2fd9e83c,Uid:2cde3ade72b82935c6fc4cc1efd4a49a,Namespace:kube-system,Attempt:0,} returns sandbox id \"adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d\"" Jan 30 04:41:54.629949 containerd[1524]: time="2025-01-30T04:41:54.629375824Z" level=info msg="CreateContainer within sandbox \"adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 04:41:54.638077 containerd[1524]: time="2025-01-30T04:41:54.637990330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-8-df2fd9e83c,Uid:d17c49c45fac2ff9fd874398841248ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"37bc5655f846b0caa5d46a9ac0849f31eb44e05f04e37e38e5f12f373e00c3d1\"" Jan 30 04:41:54.643505 containerd[1524]: time="2025-01-30T04:41:54.643473546Z" level=info msg="CreateContainer within sandbox \"37bc5655f846b0caa5d46a9ac0849f31eb44e05f04e37e38e5f12f373e00c3d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 04:41:54.644694 containerd[1524]: time="2025-01-30T04:41:54.644659952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-8-df2fd9e83c,Uid:3a1af4af34c78f6900180b260047e815,Namespace:kube-system,Attempt:0,} returns sandbox id \"e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6\"" Jan 30 04:41:54.649183 containerd[1524]: time="2025-01-30T04:41:54.649003379Z" level=info msg="CreateContainer within sandbox \"e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 04:41:54.667237 containerd[1524]: time="2025-01-30T04:41:54.667170284Z" level=info msg="CreateContainer within sandbox \"37bc5655f846b0caa5d46a9ac0849f31eb44e05f04e37e38e5f12f373e00c3d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf7e3f427ddde3a61ef6e0102cd220f5d6649cd57820987786263eb5f1be1c1b\"" Jan 30 04:41:54.668978 containerd[1524]: time="2025-01-30T04:41:54.667777198Z" level=info msg="CreateContainer within sandbox \"adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d\"" Jan 30 04:41:54.668978 containerd[1524]: time="2025-01-30T04:41:54.667837510Z" level=info msg="StartContainer for \"cf7e3f427ddde3a61ef6e0102cd220f5d6649cd57820987786263eb5f1be1c1b\"" Jan 30 04:41:54.668978 containerd[1524]: time="2025-01-30T04:41:54.668348185Z" level=info msg="StartContainer for \"6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d\"" Jan 30 04:41:54.675081 containerd[1524]: time="2025-01-30T04:41:54.675050789Z" level=info msg="CreateContainer within sandbox \"e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb\"" Jan 30 04:41:54.677195 containerd[1524]: time="2025-01-30T04:41:54.677160611Z" level=info msg="StartContainer for 
\"257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb\"" Jan 30 04:41:54.705711 systemd[1]: Started cri-containerd-cf7e3f427ddde3a61ef6e0102cd220f5d6649cd57820987786263eb5f1be1c1b.scope - libcontainer container cf7e3f427ddde3a61ef6e0102cd220f5d6649cd57820987786263eb5f1be1c1b. Jan 30 04:41:54.721083 systemd[1]: Started cri-containerd-257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb.scope - libcontainer container 257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb. Jan 30 04:41:54.723178 systemd[1]: Started cri-containerd-6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d.scope - libcontainer container 6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d. Jan 30 04:41:54.762709 containerd[1524]: time="2025-01-30T04:41:54.762667244Z" level=info msg="StartContainer for \"cf7e3f427ddde3a61ef6e0102cd220f5d6649cd57820987786263eb5f1be1c1b\" returns successfully" Jan 30 04:41:54.782729 containerd[1524]: time="2025-01-30T04:41:54.782342648Z" level=info msg="StartContainer for \"6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d\" returns successfully" Jan 30 04:41:54.800237 containerd[1524]: time="2025-01-30T04:41:54.800080451Z" level=info msg="StartContainer for \"257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb\" returns successfully" Jan 30 04:41:54.822182 kubelet[2524]: W0130 04:41:54.822088 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.822182 kubelet[2524]: E0130 04:41:54.822145 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.837097 kubelet[2524]: E0130 04:41:54.837039 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-df2fd9e83c?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="1.6s" Jan 30 04:41:54.940939 kubelet[2524]: I0130 04:41:54.940490 2524 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:54.940939 kubelet[2524]: E0130 04:41:54.940762 2524 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:54.975343 kubelet[2524]: W0130 04:41:54.975229 2524 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:54.975343 kubelet[2524]: E0130 04:41:54.975303 2524 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 30 04:41:56.441710 kubelet[2524]: E0130 04:41:56.441641 2524 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4186-1-0-8-df2fd9e83c\" not found" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:56.510971 kubelet[2524]: E0130 04:41:56.510837 2524 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186-1-0-8-df2fd9e83c" not found Jan 30 04:41:56.543916 kubelet[2524]: I0130 04:41:56.543607 2524 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:56.556402 kubelet[2524]: I0130 04:41:56.556356 2524 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:56.568823 kubelet[2524]: E0130 04:41:56.568774 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:56.669795 kubelet[2524]: E0130 04:41:56.669733 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:56.770375 kubelet[2524]: E0130 04:41:56.770326 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:56.871046 kubelet[2524]: E0130 04:41:56.870970 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:56.971994 kubelet[2524]: E0130 04:41:56.971941 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:57.072779 kubelet[2524]: E0130 04:41:57.072656 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:57.173545 kubelet[2524]: E0130 04:41:57.173507 2524 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-df2fd9e83c\" not found" Jan 30 04:41:57.412797 kubelet[2524]: I0130 04:41:57.412641 2524 apiserver.go:52] "Watching apiserver" Jan 30 04:41:57.431203 kubelet[2524]: I0130 04:41:57.431164 2524 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 04:41:58.115109 systemd[1]: Reloading requested from client PID 2801 ('systemctl') (unit session-7.scope)... Jan 30 04:41:58.115132 systemd[1]: Reloading... Jan 30 04:41:58.197011 zram_generator::config[2839]: No configuration found. Jan 30 04:41:58.309639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 04:41:58.393650 systemd[1]: Reloading finished in 278 ms. 
Jan 30 04:41:58.444477 kubelet[2524]: I0130 04:41:58.444405 2524 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 04:41:58.444477 kubelet[2524]: E0130 04:41:58.444335 2524 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4186-1-0-8-df2fd9e83c.181f5eb52c7e232f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-8-df2fd9e83c,UID:ci-4186-1-0-8-df2fd9e83c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-8-df2fd9e83c,},FirstTimestamp:2025-01-30 04:41:53.416856367 +0000 UTC m=+0.232453085,LastTimestamp:2025-01-30 04:41:53.416856367 +0000 UTC m=+0.232453085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-8-df2fd9e83c,}" Jan 30 04:41:58.444628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:58.460348 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 04:41:58.460574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:58.468112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 04:41:58.596146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 04:41:58.607274 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 04:41:58.654245 kubelet[2892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 04:41:58.654245 kubelet[2892]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 04:41:58.654245 kubelet[2892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 04:41:58.654245 kubelet[2892]: I0130 04:41:58.653803 2892 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 04:41:58.658672 kubelet[2892]: I0130 04:41:58.658649 2892 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 04:41:58.658672 kubelet[2892]: I0130 04:41:58.658667 2892 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 04:41:58.658846 kubelet[2892]: I0130 04:41:58.658796 2892 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 04:41:58.661909 kubelet[2892]: I0130 04:41:58.660207 2892 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 04:41:58.662311 kubelet[2892]: I0130 04:41:58.662289 2892 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 04:41:58.669295 kubelet[2892]: I0130 04:41:58.669275 2892 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 04:41:58.669739 kubelet[2892]: I0130 04:41:58.669704 2892 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 04:41:58.670011 kubelet[2892]: I0130 04:41:58.669798 2892 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-8-df2fd9e83c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 04:41:58.670172 kubelet[2892]: I0130 04:41:58.670155 2892 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 04:41:58.670262 kubelet[2892]: I0130 04:41:58.670250 2892 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 04:41:58.670352 kubelet[2892]: I0130 04:41:58.670343 2892 state_mem.go:36] "Initialized new in-memory state store" Jan 30 04:41:58.670485 kubelet[2892]: I0130 04:41:58.670473 2892 kubelet.go:400] "Attempting to sync node with API server" Jan 30 04:41:58.670546 kubelet[2892]: I0130 04:41:58.670537 2892 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 04:41:58.670606 kubelet[2892]: I0130 04:41:58.670597 2892 kubelet.go:312] "Adding apiserver pod source" Jan 30 04:41:58.670658 kubelet[2892]: I0130 04:41:58.670649 2892 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 04:41:58.674043 kubelet[2892]: I0130 04:41:58.674022 2892 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 04:41:58.674298 kubelet[2892]: I0130 04:41:58.674280 2892 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 04:41:58.677610 kubelet[2892]: I0130 04:41:58.677414 2892 server.go:1264] "Started kubelet" Jan 30 04:41:58.680909 kubelet[2892]: I0130 04:41:58.680735 2892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 04:41:58.692341 kubelet[2892]: E0130 04:41:58.692193 2892 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 04:41:58.694229 kubelet[2892]: I0130 04:41:58.694197 2892 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 04:41:58.695098 kubelet[2892]: I0130 04:41:58.695063 2892 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 04:41:58.695799 kubelet[2892]: I0130 04:41:58.695293 2892 server.go:455] "Adding debug handlers to kubelet server" Jan 30 04:41:58.695799 kubelet[2892]: I0130 04:41:58.695725 2892 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 04:41:58.696747 kubelet[2892]: I0130 04:41:58.696421 2892 reconciler.go:26] "Reconciler: start to sync state" Jan 30 04:41:58.697569 kubelet[2892]: I0130 04:41:58.697528 2892 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 04:41:58.697794 kubelet[2892]: I0130 04:41:58.697780 2892 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 04:41:58.700746 kubelet[2892]: I0130 04:41:58.700706 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 04:41:58.702865 kubelet[2892]: I0130 04:41:58.702833 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 04:41:58.702954 kubelet[2892]: I0130 04:41:58.702875 2892 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 04:41:58.702954 kubelet[2892]: I0130 04:41:58.702905 2892 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 04:41:58.702954 kubelet[2892]: E0130 04:41:58.702940 2892 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 04:41:58.707913 kubelet[2892]: I0130 04:41:58.706753 2892 factory.go:221] Registration of the containerd container factory successfully Jan 30 04:41:58.707913 kubelet[2892]: I0130 04:41:58.706770 2892 factory.go:221] Registration of the systemd container factory successfully Jan 30 04:41:58.707913 kubelet[2892]: I0130 04:41:58.706851 2892 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 04:41:58.756341 kubelet[2892]: I0130 04:41:58.756318 2892 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 04:41:58.756534 kubelet[2892]: I0130 04:41:58.756504 2892 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 04:41:58.756612 kubelet[2892]: I0130 04:41:58.756602 2892 state_mem.go:36] "Initialized new in-memory state store" Jan 30 04:41:58.756818 kubelet[2892]: I0130 04:41:58.756802 2892 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 04:41:58.756944 kubelet[2892]: I0130 04:41:58.756880 2892 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 04:41:58.757024 kubelet[2892]: I0130 04:41:58.757014 2892 policy_none.go:49] "None policy: Start" Jan 30 04:41:58.757757 kubelet[2892]: I0130 04:41:58.757704 2892 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 04:41:58.757757 kubelet[2892]: I0130 04:41:58.757746 2892 state_mem.go:35] "Initializing new in-memory state store" Jan 30 04:41:58.757932 kubelet[2892]: I0130 04:41:58.757900 2892 state_mem.go:75] "Updated machine memory state" Jan 30 04:41:58.762232 kubelet[2892]: I0130 04:41:58.762204 2892 manager.go:479] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 04:41:58.762598 kubelet[2892]: I0130 04:41:58.762403 2892 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 04:41:58.762598 kubelet[2892]: I0130 04:41:58.762510 2892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 04:41:58.798667 kubelet[2892]: I0130 04:41:58.798643 2892 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.803716 kubelet[2892]: I0130 04:41:58.803683 2892 topology_manager.go:215] "Topology Admit Handler" podUID="d17c49c45fac2ff9fd874398841248ae" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.804331 kubelet[2892]: I0130 04:41:58.804178 2892 topology_manager.go:215] "Topology Admit Handler" podUID="2cde3ade72b82935c6fc4cc1efd4a49a" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.804331 kubelet[2892]: I0130 04:41:58.804234 2892 topology_manager.go:215] "Topology Admit Handler" podUID="3a1af4af34c78f6900180b260047e815" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.807112 kubelet[2892]: I0130 04:41:58.807087 2892 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.807178 kubelet[2892]: I0130 04:41:58.807163 2892 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.810776 kubelet[2892]: E0130 04:41:58.810700 2892 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998557 kubelet[2892]: I0130 04:41:58.998359 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998557 kubelet[2892]: I0130 04:41:58.998419 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998557 kubelet[2892]: I0130 04:41:58.998448 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998557 kubelet[2892]: I0130 04:41:58.998478 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998557 
kubelet[2892]: I0130 04:41:58.998501 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998852 kubelet[2892]: I0130 04:41:58.998524 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998852 kubelet[2892]: I0130 04:41:58.998546 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cde3ade72b82935c6fc4cc1efd4a49a-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-df2fd9e83c\" (UID: \"2cde3ade72b82935c6fc4cc1efd4a49a\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998852 kubelet[2892]: I0130 04:41:58.998568 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a1af4af34c78f6900180b260047e815-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-8-df2fd9e83c\" (UID: \"3a1af4af34c78f6900180b260047e815\") " pod="kube-system/kube-scheduler-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:58.998852 kubelet[2892]: I0130 04:41:58.998589 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17c49c45fac2ff9fd874398841248ae-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" (UID: \"d17c49c45fac2ff9fd874398841248ae\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:59.132739 sudo[2926]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 04:41:59.133208 sudo[2926]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 04:41:59.629668 sudo[2926]: pam_unix(sudo:session): session closed for user root Jan 30 04:41:59.671718 kubelet[2892]: I0130 04:41:59.671680 2892 apiserver.go:52] "Watching apiserver" Jan 30 04:41:59.696217 kubelet[2892]: I0130 04:41:59.696148 2892 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 04:41:59.752910 kubelet[2892]: E0130 04:41:59.751432 2892 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-8-df2fd9e83c\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" Jan 30 04:41:59.773745 kubelet[2892]: I0130 04:41:59.773594 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-8-df2fd9e83c" podStartSLOduration=1.773573593 podStartE2EDuration="1.773573593s" podCreationTimestamp="2025-01-30 04:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:41:59.764671892 +0000 UTC m=+1.150345355" watchObservedRunningTime="2025-01-30 04:41:59.773573593 +0000 UTC m=+1.159247056" Jan 30 04:41:59.785304 
kubelet[2892]: I0130 04:41:59.785225 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-8-df2fd9e83c" podStartSLOduration=2.785158108 podStartE2EDuration="2.785158108s" podCreationTimestamp="2025-01-30 04:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:41:59.775093434 +0000 UTC m=+1.160766898" watchObservedRunningTime="2025-01-30 04:41:59.785158108 +0000 UTC m=+1.170831572" Jan 30 04:41:59.785817 kubelet[2892]: I0130 04:41:59.785605 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-8-df2fd9e83c" podStartSLOduration=1.785598592 podStartE2EDuration="1.785598592s" podCreationTimestamp="2025-01-30 04:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:41:59.783945602 +0000 UTC m=+1.169619066" watchObservedRunningTime="2025-01-30 04:41:59.785598592 +0000 UTC m=+1.171272055" Jan 30 04:42:00.844695 sudo[1935]: pam_unix(sudo:session): session closed for user root Jan 30 04:42:01.003211 sshd[1934]: Connection closed by 139.178.89.65 port 49456 Jan 30 04:42:01.005407 sshd-session[1932]: pam_unix(sshd:session): session closed for user core Jan 30 04:42:01.011058 systemd[1]: sshd@7-116.202.14.223:22-139.178.89.65:49456.service: Deactivated successfully. Jan 30 04:42:01.013607 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 04:42:01.013765 systemd[1]: session-7.scope: Consumed 4.402s CPU time, 185.7M memory peak, 0B memory swap peak. Jan 30 04:42:01.014433 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jan 30 04:42:01.015401 systemd-logind[1509]: Removed session 7. Jan 30 04:42:10.026894 update_engine[1514]: I20250130 04:42:10.026827 1514 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 04:42:10.026894 update_engine[1514]: I20250130 04:42:10.026901 1514 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 04:42:10.027370 update_engine[1514]: I20250130 04:42:10.027137 1514 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 04:42:10.027695 update_engine[1514]: I20250130 04:42:10.027658 1514 omaha_request_params.cc:62] Current group set to beta Jan 30 04:42:10.028629 update_engine[1514]: I20250130 04:42:10.028543 1514 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 04:42:10.028629 update_engine[1514]: I20250130 04:42:10.028565 1514 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 04:42:10.028629 update_engine[1514]: I20250130 04:42:10.028584 1514 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 04:42:10.028629 update_engine[1514]: I20250130 04:42:10.028616 1514 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 04:42:10.028743 update_engine[1514]: I20250130 04:42:10.028687 1514 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 04:42:10.028743 update_engine[1514]: I20250130 04:42:10.028697 1514 omaha_request_action.cc:272] Request: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: Jan 30 04:42:10.028743 update_engine[1514]: I20250130 04:42:10.028705 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 04:42:10.029245 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 04:42:10.030922 update_engine[1514]: I20250130 04:42:10.030864 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 04:42:10.031266 update_engine[1514]: I20250130 04:42:10.031222 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 04:42:10.031858 update_engine[1514]: E20250130 04:42:10.031824 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 04:42:10.031992 update_engine[1514]: I20250130 04:42:10.031913 1514 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 04:42:13.470121 kubelet[2892]: I0130 04:42:13.470077 2892 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 04:42:13.470768 containerd[1524]: time="2025-01-30T04:42:13.470609466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 04:42:13.472361 kubelet[2892]: I0130 04:42:13.471633 2892 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 04:42:14.485919 kubelet[2892]: I0130 04:42:14.484569 2892 topology_manager.go:215] "Topology Admit Handler" podUID="bd1a451d-6173-4175-85e2-b55411ac1c06" podNamespace="kube-system" podName="kube-proxy-5d8gg" Jan 30 04:42:14.496826 kubelet[2892]: I0130 04:42:14.495615 2892 topology_manager.go:215] "Topology Admit Handler" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" podNamespace="kube-system" podName="cilium-8pwrt" Jan 30 04:42:14.499506 systemd[1]: Created slice kubepods-besteffort-podbd1a451d_6173_4175_85e2_b55411ac1c06.slice - libcontainer container kubepods-besteffort-podbd1a451d_6173_4175_85e2_b55411ac1c06.slice. 
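
The "Created slice kubepods-besteffort-pod…" entry above reflects the systemd cgroup driver configured in the kubelet NodeConfig earlier: each admitted pod gets a slice named from its QoS class and UID, with the UID's dashes replaced by underscores. A small sketch of that mapping (an illustrative reconstruction, not kubelet's own code):

package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the naming visible in the systemd entries in this log:
// QoS class plus pod UID, with the UID's dashes turned into underscores.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the kube-proxy-5d8gg and cilium-8pwrt admissions above.
	fmt.Println(sliceName("besteffort", "bd1a451d-6173-4175-85e2-b55411ac1c06"))
	// kubepods-besteffort-podbd1a451d_6173_4175_85e2_b55411ac1c06.slice
	fmt.Println(sliceName("burstable", "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"))
	// kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice
}
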
Jan 30 04:42:14.506914 kubelet[2892]: I0130 04:42:14.506862 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1a451d-6173-4175-85e2-b55411ac1c06-xtables-lock\") pod \"kube-proxy-5d8gg\" (UID: \"bd1a451d-6173-4175-85e2-b55411ac1c06\") " pod="kube-system/kube-proxy-5d8gg" Jan 30 04:42:14.507029 kubelet[2892]: I0130 04:42:14.506934 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlhf\" (UniqueName: \"kubernetes.io/projected/bd1a451d-6173-4175-85e2-b55411ac1c06-kube-api-access-6qlhf\") pod \"kube-proxy-5d8gg\" (UID: \"bd1a451d-6173-4175-85e2-b55411ac1c06\") " pod="kube-system/kube-proxy-5d8gg" Jan 30 04:42:14.507029 kubelet[2892]: I0130 04:42:14.506984 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd1a451d-6173-4175-85e2-b55411ac1c06-kube-proxy\") pod \"kube-proxy-5d8gg\" (UID: \"bd1a451d-6173-4175-85e2-b55411ac1c06\") " pod="kube-system/kube-proxy-5d8gg" Jan 30 04:42:14.507029 kubelet[2892]: I0130 04:42:14.507013 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1a451d-6173-4175-85e2-b55411ac1c06-lib-modules\") pod \"kube-proxy-5d8gg\" (UID: \"bd1a451d-6173-4175-85e2-b55411ac1c06\") " pod="kube-system/kube-proxy-5d8gg" Jan 30 04:42:14.507436 kubelet[2892]: W0130 04:42:14.507283 2892 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-8-df2fd9e83c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-df2fd9e83c' and this object Jan 30 04:42:14.507436 kubelet[2892]: E0130 04:42:14.507312 2892 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-8-df2fd9e83c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-df2fd9e83c' and this object Jan 30 04:42:14.507436 kubelet[2892]: W0130 04:42:14.507360 2892 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-8-df2fd9e83c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-df2fd9e83c' and this object Jan 30 04:42:14.507436 kubelet[2892]: E0130 04:42:14.507369 2892 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-8-df2fd9e83c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-df2fd9e83c' and this object Jan 30 04:42:14.519331 systemd[1]: Created slice kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice - libcontainer container kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice. 
Jan 30 04:42:14.577138 kubelet[2892]: I0130 04:42:14.576751 2892 topology_manager.go:215] "Topology Admit Handler" podUID="110a281d-60bb-48e2-aad6-f2afaead73bb" podNamespace="kube-system" podName="cilium-operator-599987898-gtfkq" Jan 30 04:42:14.585182 systemd[1]: Created slice kubepods-besteffort-pod110a281d_60bb_48e2_aad6_f2afaead73bb.slice - libcontainer container kubepods-besteffort-pod110a281d_60bb_48e2_aad6_f2afaead73bb.slice. Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607616 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ksc\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-kube-api-access-v5ksc\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607659 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-cgroup\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607675 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cni-path\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607688 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-etc-cni-netd\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607703 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-clustermesh-secrets\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608037 kubelet[2892]: I0130 04:42:14.607715 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-bpf-maps\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608262 kubelet[2892]: I0130 04:42:14.607730 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hostproc\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608262 kubelet[2892]: I0130 04:42:14.607751 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-net\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608262 kubelet[2892]: I0130 04:42:14.607778 2892 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-lib-modules\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608262 kubelet[2892]: I0130 04:42:14.607791 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/110a281d-60bb-48e2-aad6-f2afaead73bb-cilium-config-path\") pod \"cilium-operator-599987898-gtfkq\" (UID: \"110a281d-60bb-48e2-aad6-f2afaead73bb\") " pod="kube-system/cilium-operator-599987898-gtfkq" Jan 30 04:42:14.608262 kubelet[2892]: I0130 04:42:14.607804 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-config-path\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608372 kubelet[2892]: I0130 04:42:14.607816 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-kernel\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608372 kubelet[2892]: I0130 04:42:14.607835 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzsk\" (UniqueName: \"kubernetes.io/projected/110a281d-60bb-48e2-aad6-f2afaead73bb-kube-api-access-zzzsk\") pod \"cilium-operator-599987898-gtfkq\" (UID: \"110a281d-60bb-48e2-aad6-f2afaead73bb\") " pod="kube-system/cilium-operator-599987898-gtfkq" Jan 30 04:42:14.608372 kubelet[2892]: I0130 04:42:14.607854 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-run\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608372 kubelet[2892]: I0130 04:42:14.607878 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.608372 kubelet[2892]: I0130 04:42:14.607911 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-xtables-lock\") pod \"cilium-8pwrt\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " pod="kube-system/cilium-8pwrt" Jan 30 04:42:14.813118 containerd[1524]: time="2025-01-30T04:42:14.813058621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5d8gg,Uid:bd1a451d-6173-4175-85e2-b55411ac1c06,Namespace:kube-system,Attempt:0,}" Jan 30 04:42:14.838202 containerd[1524]: time="2025-01-30T04:42:14.838118243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:42:14.838339 containerd[1524]: time="2025-01-30T04:42:14.838203683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:42:14.838339 containerd[1524]: time="2025-01-30T04:42:14.838234560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:14.838401 containerd[1524]: time="2025-01-30T04:42:14.838338756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:14.859028 systemd[1]: Started cri-containerd-85eafd0df8d4027b03c82c7827e7d403cc5761c1d0ca97e494e21aa29700764e.scope - libcontainer container 85eafd0df8d4027b03c82c7827e7d403cc5761c1d0ca97e494e21aa29700764e. Jan 30 04:42:14.879110 containerd[1524]: time="2025-01-30T04:42:14.879068780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5d8gg,Uid:bd1a451d-6173-4175-85e2-b55411ac1c06,Namespace:kube-system,Attempt:0,} returns sandbox id \"85eafd0df8d4027b03c82c7827e7d403cc5761c1d0ca97e494e21aa29700764e\"" Jan 30 04:42:14.881949 containerd[1524]: time="2025-01-30T04:42:14.881800763Z" level=info msg="CreateContainer within sandbox \"85eafd0df8d4027b03c82c7827e7d403cc5761c1d0ca97e494e21aa29700764e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 04:42:14.891464 containerd[1524]: time="2025-01-30T04:42:14.891432355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gtfkq,Uid:110a281d-60bb-48e2-aad6-f2afaead73bb,Namespace:kube-system,Attempt:0,}" Jan 30 04:42:14.893617 containerd[1524]: time="2025-01-30T04:42:14.893587468Z" level=info msg="CreateContainer within sandbox \"85eafd0df8d4027b03c82c7827e7d403cc5761c1d0ca97e494e21aa29700764e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b0d01be41610b6dddacd5d8076664da4c39ebf1d09d6e258aaf202dbe80d280\"" Jan 30 04:42:14.894115 containerd[1524]: time="2025-01-30T04:42:14.894076533Z" level=info msg="StartContainer for \"4b0d01be41610b6dddacd5d8076664da4c39ebf1d09d6e258aaf202dbe80d280\"" Jan 30 04:42:14.915539 containerd[1524]: time="2025-01-30T04:42:14.914638911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:42:14.916357 containerd[1524]: time="2025-01-30T04:42:14.915529006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:42:14.916357 containerd[1524]: time="2025-01-30T04:42:14.915569853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:14.916357 containerd[1524]: time="2025-01-30T04:42:14.915777351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:14.923816 systemd[1]: Started cri-containerd-4b0d01be41610b6dddacd5d8076664da4c39ebf1d09d6e258aaf202dbe80d280.scope - libcontainer container 4b0d01be41610b6dddacd5d8076664da4c39ebf1d09d6e258aaf202dbe80d280. Jan 30 04:42:14.940044 systemd[1]: Started cri-containerd-21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553.scope - libcontainer container 21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553. 
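
All of the RunPodSandbox / CreateContainer / StartContainer messages in this log are kubelet driving containerd over the CRI socket. The sketch below talks to the same API read-only; the socket path and the ListPodSandbox call are assumptions for illustration, not taken from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI endpoint; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the runtime for its version, then list the sandboxes it currently manages.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	pods, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Id, p.Metadata.Name, p.State)
	}
}
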
Jan 30 04:42:14.963702 containerd[1524]: time="2025-01-30T04:42:14.963663185Z" level=info msg="StartContainer for \"4b0d01be41610b6dddacd5d8076664da4c39ebf1d09d6e258aaf202dbe80d280\" returns successfully" Jan 30 04:42:14.983348 containerd[1524]: time="2025-01-30T04:42:14.983294589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gtfkq,Uid:110a281d-60bb-48e2-aad6-f2afaead73bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\"" Jan 30 04:42:14.984746 containerd[1524]: time="2025-01-30T04:42:14.984716871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 04:42:15.711254 kubelet[2892]: E0130 04:42:15.711177 2892 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 04:42:15.711254 kubelet[2892]: E0130 04:42:15.711240 2892 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8pwrt: failed to sync secret cache: timed out waiting for the condition Jan 30 04:42:15.711829 kubelet[2892]: E0130 04:42:15.711317 2892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls podName:01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2 nodeName:}" failed. No retries permitted until 2025-01-30 04:42:16.211295877 +0000 UTC m=+17.596969340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls") pod "cilium-8pwrt" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2") : failed to sync secret cache: timed out waiting for the condition Jan 30 04:42:16.324229 containerd[1524]: time="2025-01-30T04:42:16.324135952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pwrt,Uid:01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2,Namespace:kube-system,Attempt:0,}" Jan 30 04:42:16.350771 containerd[1524]: time="2025-01-30T04:42:16.350328058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:42:16.350771 containerd[1524]: time="2025-01-30T04:42:16.350388141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:42:16.350771 containerd[1524]: time="2025-01-30T04:42:16.350404702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:16.350771 containerd[1524]: time="2025-01-30T04:42:16.350490824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:16.379088 systemd[1]: Started cri-containerd-7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e.scope - libcontainer container 7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e. 
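
The MountVolume.SetUp failure above is transient: the secret cache had not synced yet, presumably because the hubble-server-certs watch was still being rejected by the node authorizer (see the earlier "no relationship found" warnings), so the operation executor re-queues the mount after the 500ms durationBeforeRetry reported in the entry. The shape of that retry, as a stand-alone sketch rather than kubelet's actual nestedpendingoperations code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountHubbleTLS stands in for the projected-volume setup that fails above;
// it is a placeholder, not kubelet's real MountVolume.SetUp.
func mountHubbleTLS(attempt int) error {
	if attempt < 2 {
		return errors.New("failed to sync secret cache: timed out waiting for the condition")
	}
	return nil
}

func main() {
	const durationBeforeRetry = 500 * time.Millisecond // value reported in the log entry
	for attempt := 1; ; attempt++ {
		err := mountHubbleTLS(attempt)
		if err == nil {
			fmt.Println("volume hubble-tls mounted on attempt", attempt)
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, durationBeforeRetry)
		time.Sleep(durationBeforeRetry)
	}
}
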
Jan 30 04:42:16.405427 containerd[1524]: time="2025-01-30T04:42:16.405385754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pwrt,Uid:01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\"" Jan 30 04:42:17.044335 containerd[1524]: time="2025-01-30T04:42:17.044296429Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:42:17.045137 containerd[1524]: time="2025-01-30T04:42:17.045101396Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 04:42:17.045644 containerd[1524]: time="2025-01-30T04:42:17.045455959Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:42:17.046600 containerd[1524]: time="2025-01-30T04:42:17.046577699Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.061441752s" Jan 30 04:42:17.046841 containerd[1524]: time="2025-01-30T04:42:17.046823549Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 04:42:17.056154 containerd[1524]: time="2025-01-30T04:42:17.055692186Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 04:42:17.062422 containerd[1524]: time="2025-01-30T04:42:17.062384721Z" level=info msg="CreateContainer within sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 04:42:17.073610 containerd[1524]: time="2025-01-30T04:42:17.073564384Z" level=info msg="CreateContainer within sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\"" Jan 30 04:42:17.077169 containerd[1524]: time="2025-01-30T04:42:17.077059856Z" level=info msg="StartContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\"" Jan 30 04:42:17.112190 systemd[1]: Started cri-containerd-eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655.scope - libcontainer container eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655. 
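
The containerd figures above are enough for a rough throughput estimate: 18,904,197 bytes read for the cilium operator-generic image in 2.061441752s works out to roughly 9.2 MB/s. The same arithmetic, written out with the values copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the containerd entries above for the operator-generic pull.
	const bytesRead = 18904197
	pullTime := 2061441752 * time.Nanosecond // "in 2.061441752s"

	rate := float64(bytesRead) / pullTime.Seconds()
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}
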
Jan 30 04:42:17.142314 containerd[1524]: time="2025-01-30T04:42:17.142081362Z" level=info msg="StartContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" returns successfully" Jan 30 04:42:17.806119 kubelet[2892]: I0130 04:42:17.805685 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5d8gg" podStartSLOduration=3.805655588 podStartE2EDuration="3.805655588s" podCreationTimestamp="2025-01-30 04:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:42:15.778094348 +0000 UTC m=+17.163767821" watchObservedRunningTime="2025-01-30 04:42:17.805655588 +0000 UTC m=+19.191329051" Jan 30 04:42:17.806119 kubelet[2892]: I0130 04:42:17.805925 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gtfkq" podStartSLOduration=1.739931732 podStartE2EDuration="3.80591837s" podCreationTimestamp="2025-01-30 04:42:14 +0000 UTC" firstStartedPulling="2025-01-30 04:42:14.98440599 +0000 UTC m=+16.370079453" lastFinishedPulling="2025-01-30 04:42:17.050392628 +0000 UTC m=+18.436066091" observedRunningTime="2025-01-30 04:42:17.804810677 +0000 UTC m=+19.190484141" watchObservedRunningTime="2025-01-30 04:42:17.80591837 +0000 UTC m=+19.191591833" Jan 30 04:42:20.027158 update_engine[1514]: I20250130 04:42:20.026921 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 04:42:20.027523 update_engine[1514]: I20250130 04:42:20.027168 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 04:42:20.027523 update_engine[1514]: I20250130 04:42:20.027400 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 04:42:20.028179 update_engine[1514]: E20250130 04:42:20.027742 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 04:42:20.028179 update_engine[1514]: I20250130 04:42:20.028034 1514 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 04:42:21.598503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271390081.mount: Deactivated successfully. 
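
The figures in the cilium-operator pod_startup_latency_tracker entry above work out neatly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which matches the reported 1.739931732. Reproducing the arithmetic with the timestamps from the entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the cilium-operator-599987898-gtfkq entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-30 04:42:14 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2025-01-30 04:42:14.98440599 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2025-01-30 04:42:17.050392628 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-30 04:42:17.80591837 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 3.80591837s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 1.739931732s
	fmt.Println(e2e, slo)
}
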
Jan 30 04:42:23.196164 containerd[1524]: time="2025-01-30T04:42:23.196097760Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:42:23.199845 containerd[1524]: time="2025-01-30T04:42:23.198743724Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 04:42:23.199845 containerd[1524]: time="2025-01-30T04:42:23.198801241Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 04:42:23.204554 containerd[1524]: time="2025-01-30T04:42:23.204515528Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.1487849s" Jan 30 04:42:23.204608 containerd[1524]: time="2025-01-30T04:42:23.204555413Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 04:42:23.208342 containerd[1524]: time="2025-01-30T04:42:23.208231133Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 04:42:23.252330 containerd[1524]: time="2025-01-30T04:42:23.252282377Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\"" Jan 30 04:42:23.252865 containerd[1524]: time="2025-01-30T04:42:23.252811247Z" level=info msg="StartContainer for \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\"" Jan 30 04:42:23.417155 systemd[1]: Started cri-containerd-be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e.scope - libcontainer container be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e. Jan 30 04:42:23.443863 containerd[1524]: time="2025-01-30T04:42:23.443812525Z" level=info msg="StartContainer for \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\" returns successfully" Jan 30 04:42:23.455321 systemd[1]: cri-containerd-be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e.scope: Deactivated successfully. 
Jan 30 04:42:23.555539 containerd[1524]: time="2025-01-30T04:42:23.542439032Z" level=info msg="shim disconnected" id=be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e namespace=k8s.io Jan 30 04:42:23.555539 containerd[1524]: time="2025-01-30T04:42:23.555517053Z" level=warning msg="cleaning up after shim disconnected" id=be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e namespace=k8s.io Jan 30 04:42:23.555539 containerd[1524]: time="2025-01-30T04:42:23.555532693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:42:23.805419 containerd[1524]: time="2025-01-30T04:42:23.805223406Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 04:42:23.822386 containerd[1524]: time="2025-01-30T04:42:23.822331231Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\"" Jan 30 04:42:23.838765 containerd[1524]: time="2025-01-30T04:42:23.838709611Z" level=info msg="StartContainer for \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\"" Jan 30 04:42:23.862082 systemd[1]: Started cri-containerd-7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b.scope - libcontainer container 7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b. Jan 30 04:42:23.895572 containerd[1524]: time="2025-01-30T04:42:23.895529176Z" level=info msg="StartContainer for \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\" returns successfully" Jan 30 04:42:23.906569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 04:42:23.907095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 04:42:23.907271 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 04:42:23.913244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 04:42:23.913458 systemd[1]: cri-containerd-7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b.scope: Deactivated successfully. Jan 30 04:42:23.948640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 04:42:23.958348 containerd[1524]: time="2025-01-30T04:42:23.958288109Z" level=info msg="shim disconnected" id=7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b namespace=k8s.io Jan 30 04:42:23.958348 containerd[1524]: time="2025-01-30T04:42:23.958340767Z" level=warning msg="cleaning up after shim disconnected" id=7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b namespace=k8s.io Jan 30 04:42:23.958348 containerd[1524]: time="2025-01-30T04:42:23.958349394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:42:24.245102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e-rootfs.mount: Deactivated successfully. 
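
The apply-sysctl-overwrites step and the systemd-sysctl restart around it both come down to the same mechanism: kernel parameters are set by writing files under /proc/sys. Which keys Cilium's init container actually rewrote is not visible in this log; the sketch below only illustrates the mechanism, with a deliberately hypothetical key:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets a kernel parameter by writing its /proc/sys file,
// e.g. "net.ipv4.conf.all.rp_filter" -> /proc/sys/net/ipv4/conf/all/rp_filter.
// The key used in main is only an example; the log does not say which
// sysctls the apply-sysctl-overwrites container touched.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Println("sysctl write failed (needs root):", err)
	}
}
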
Jan 30 04:42:24.808705 containerd[1524]: time="2025-01-30T04:42:24.808397257Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 04:42:24.844119 containerd[1524]: time="2025-01-30T04:42:24.844061592Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\"" Jan 30 04:42:24.845690 containerd[1524]: time="2025-01-30T04:42:24.844644633Z" level=info msg="StartContainer for \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\"" Jan 30 04:42:24.881095 systemd[1]: Started cri-containerd-65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28.scope - libcontainer container 65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28. Jan 30 04:42:24.912529 containerd[1524]: time="2025-01-30T04:42:24.912368214Z" level=info msg="StartContainer for \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\" returns successfully" Jan 30 04:42:24.917273 systemd[1]: cri-containerd-65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28.scope: Deactivated successfully. Jan 30 04:42:24.944330 containerd[1524]: time="2025-01-30T04:42:24.944250660Z" level=info msg="shim disconnected" id=65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28 namespace=k8s.io Jan 30 04:42:24.944330 containerd[1524]: time="2025-01-30T04:42:24.944312386Z" level=warning msg="cleaning up after shim disconnected" id=65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28 namespace=k8s.io Jan 30 04:42:24.944330 containerd[1524]: time="2025-01-30T04:42:24.944325730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:42:25.245060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28-rootfs.mount: Deactivated successfully. Jan 30 04:42:25.811719 containerd[1524]: time="2025-01-30T04:42:25.811674569Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 04:42:25.832755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545114325.mount: Deactivated successfully. Jan 30 04:42:25.841354 containerd[1524]: time="2025-01-30T04:42:25.839133536Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\"" Jan 30 04:42:25.842042 containerd[1524]: time="2025-01-30T04:42:25.842013077Z" level=info msg="StartContainer for \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\"" Jan 30 04:42:25.865028 systemd[1]: Started cri-containerd-e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c.scope - libcontainer container e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c. Jan 30 04:42:25.888540 systemd[1]: cri-containerd-e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c.scope: Deactivated successfully. 
Jan 30 04:42:25.889632 containerd[1524]: time="2025-01-30T04:42:25.888709738Z" level=info msg="StartContainer for \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\" returns successfully" Jan 30 04:42:25.914727 containerd[1524]: time="2025-01-30T04:42:25.914661372Z" level=info msg="shim disconnected" id=e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c namespace=k8s.io Jan 30 04:42:25.914727 containerd[1524]: time="2025-01-30T04:42:25.914723038Z" level=warning msg="cleaning up after shim disconnected" id=e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c namespace=k8s.io Jan 30 04:42:25.914727 containerd[1524]: time="2025-01-30T04:42:25.914732274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:42:26.245657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c-rootfs.mount: Deactivated successfully. Jan 30 04:42:26.822821 containerd[1524]: time="2025-01-30T04:42:26.822774816Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 04:42:26.842173 containerd[1524]: time="2025-01-30T04:42:26.842123300Z" level=info msg="CreateContainer within sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\"" Jan 30 04:42:26.842780 containerd[1524]: time="2025-01-30T04:42:26.842752368Z" level=info msg="StartContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\"" Jan 30 04:42:26.876130 systemd[1]: Started cri-containerd-e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a.scope - libcontainer container e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a. Jan 30 04:42:26.904126 containerd[1524]: time="2025-01-30T04:42:26.904084993Z" level=info msg="StartContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" returns successfully" Jan 30 04:42:27.084732 kubelet[2892]: I0130 04:42:27.084395 2892 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 04:42:27.111207 kubelet[2892]: I0130 04:42:27.110095 2892 topology_manager.go:215] "Topology Admit Handler" podUID="45f8009a-0c22-4d8c-9038-4e030021e956" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rqs26" Jan 30 04:42:27.114391 kubelet[2892]: I0130 04:42:27.113390 2892 topology_manager.go:215] "Topology Admit Handler" podUID="bf23d295-3e54-46cf-aa60-b809e7c1fd1b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jr8d5" Jan 30 04:42:27.122586 systemd[1]: Created slice kubepods-burstable-pod45f8009a_0c22_4d8c_9038_4e030021e956.slice - libcontainer container kubepods-burstable-pod45f8009a_0c22_4d8c_9038_4e030021e956.slice. Jan 30 04:42:27.131059 systemd[1]: Created slice kubepods-burstable-podbf23d295_3e54_46cf_aa60_b809e7c1fd1b.slice - libcontainer container kubepods-burstable-podbf23d295_3e54_46cf_aa60_b809e7c1fd1b.slice. 
Jan 30 04:42:27.206193 kubelet[2892]: I0130 04:42:27.206130 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6t4\" (UniqueName: \"kubernetes.io/projected/bf23d295-3e54-46cf-aa60-b809e7c1fd1b-kube-api-access-lk6t4\") pod \"coredns-7db6d8ff4d-jr8d5\" (UID: \"bf23d295-3e54-46cf-aa60-b809e7c1fd1b\") " pod="kube-system/coredns-7db6d8ff4d-jr8d5" Jan 30 04:42:27.206629 kubelet[2892]: I0130 04:42:27.206610 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45f8009a-0c22-4d8c-9038-4e030021e956-config-volume\") pod \"coredns-7db6d8ff4d-rqs26\" (UID: \"45f8009a-0c22-4d8c-9038-4e030021e956\") " pod="kube-system/coredns-7db6d8ff4d-rqs26" Jan 30 04:42:27.206800 kubelet[2892]: I0130 04:42:27.206745 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkb5\" (UniqueName: \"kubernetes.io/projected/45f8009a-0c22-4d8c-9038-4e030021e956-kube-api-access-4tkb5\") pod \"coredns-7db6d8ff4d-rqs26\" (UID: \"45f8009a-0c22-4d8c-9038-4e030021e956\") " pod="kube-system/coredns-7db6d8ff4d-rqs26" Jan 30 04:42:27.206800 kubelet[2892]: I0130 04:42:27.206766 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf23d295-3e54-46cf-aa60-b809e7c1fd1b-config-volume\") pod \"coredns-7db6d8ff4d-jr8d5\" (UID: \"bf23d295-3e54-46cf-aa60-b809e7c1fd1b\") " pod="kube-system/coredns-7db6d8ff4d-jr8d5" Jan 30 04:42:27.429842 containerd[1524]: time="2025-01-30T04:42:27.429736565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rqs26,Uid:45f8009a-0c22-4d8c-9038-4e030021e956,Namespace:kube-system,Attempt:0,}" Jan 30 04:42:27.440314 containerd[1524]: time="2025-01-30T04:42:27.440275967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jr8d5,Uid:bf23d295-3e54-46cf-aa60-b809e7c1fd1b,Namespace:kube-system,Attempt:0,}" Jan 30 04:42:29.178758 systemd-networkd[1383]: cilium_host: Link UP Jan 30 04:42:29.180124 systemd-networkd[1383]: cilium_net: Link UP Jan 30 04:42:29.180365 systemd-networkd[1383]: cilium_net: Gained carrier Jan 30 04:42:29.180552 systemd-networkd[1383]: cilium_host: Gained carrier Jan 30 04:42:29.264175 systemd-networkd[1383]: cilium_net: Gained IPv6LL Jan 30 04:42:29.289010 systemd-networkd[1383]: cilium_vxlan: Link UP Jan 30 04:42:29.289022 systemd-networkd[1383]: cilium_vxlan: Gained carrier Jan 30 04:42:29.657021 kernel: NET: Registered PF_ALG protocol family Jan 30 04:42:30.034944 update_engine[1514]: I20250130 04:42:30.034693 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 04:42:30.040681 update_engine[1514]: I20250130 04:42:30.040341 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 04:42:30.040681 update_engine[1514]: I20250130 04:42:30.040616 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 04:42:30.041171 update_engine[1514]: E20250130 04:42:30.041095 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 04:42:30.041171 update_engine[1514]: I20250130 04:42:30.041142 1514 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 04:42:30.192063 systemd-networkd[1383]: cilium_host: Gained IPv6LL Jan 30 04:42:30.276126 systemd-networkd[1383]: lxc_health: Link UP Jan 30 04:42:30.285437 systemd-networkd[1383]: lxc_health: Gained carrier Jan 30 04:42:30.342464 kubelet[2892]: I0130 04:42:30.342404 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8pwrt" podStartSLOduration=9.544196795 podStartE2EDuration="16.342387593s" podCreationTimestamp="2025-01-30 04:42:14 +0000 UTC" firstStartedPulling="2025-01-30 04:42:16.407531509 +0000 UTC m=+17.793204972" lastFinishedPulling="2025-01-30 04:42:23.205722307 +0000 UTC m=+24.591395770" observedRunningTime="2025-01-30 04:42:27.850865129 +0000 UTC m=+29.236538593" watchObservedRunningTime="2025-01-30 04:42:30.342387593 +0000 UTC m=+31.728061056" Jan 30 04:42:30.493554 systemd-networkd[1383]: lxc9d4a65708fd8: Link UP Jan 30 04:42:30.498944 kernel: eth0: renamed from tmp6af81 Jan 30 04:42:30.504385 systemd-networkd[1383]: lxc9d4a65708fd8: Gained carrier Jan 30 04:42:30.522299 kernel: eth0: renamed from tmp53753 Jan 30 04:42:30.530679 systemd-networkd[1383]: tmp53753: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 04:42:30.530757 systemd-networkd[1383]: tmp53753: Cannot enable IPv6, ignoring: No such file or directory Jan 30 04:42:30.531268 systemd-networkd[1383]: tmp53753: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Jan 30 04:42:30.531292 systemd-networkd[1383]: tmp53753: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Jan 30 04:42:30.531309 systemd-networkd[1383]: tmp53753: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Jan 30 04:42:30.531328 systemd-networkd[1383]: tmp53753: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Jan 30 04:42:30.531780 systemd-networkd[1383]: lxc54676ca9bf20: Link UP Jan 30 04:42:30.535661 systemd-networkd[1383]: lxc54676ca9bf20: Gained carrier Jan 30 04:42:30.768568 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Jan 30 04:42:32.176433 systemd-networkd[1383]: lxc9d4a65708fd8: Gained IPv6LL Jan 30 04:42:32.240724 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jan 30 04:42:32.241138 systemd-networkd[1383]: lxc54676ca9bf20: Gained IPv6LL Jan 30 04:42:33.815370 containerd[1524]: time="2025-01-30T04:42:33.815291458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:42:33.815714 containerd[1524]: time="2025-01-30T04:42:33.815374823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:42:33.815714 containerd[1524]: time="2025-01-30T04:42:33.815409478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:33.815785 containerd[1524]: time="2025-01-30T04:42:33.815713668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:42:33.817419 containerd[1524]: time="2025-01-30T04:42:33.815865903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:42:33.817419 containerd[1524]: time="2025-01-30T04:42:33.816775216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:33.817657 containerd[1524]: time="2025-01-30T04:42:33.817610190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:33.819066 containerd[1524]: time="2025-01-30T04:42:33.819003970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:42:33.851026 systemd[1]: Started cri-containerd-5375304f1e60a60fa08d03c0b75b9ab17e445adeea72672b9acd5a8930467489.scope - libcontainer container 5375304f1e60a60fa08d03c0b75b9ab17e445adeea72672b9acd5a8930467489. Jan 30 04:42:33.874506 systemd[1]: Started cri-containerd-6af813433cad0a9244e87c11c25f3a269b4e0557047e73400514540f98e7cc88.scope - libcontainer container 6af813433cad0a9244e87c11c25f3a269b4e0557047e73400514540f98e7cc88. Jan 30 04:42:33.929634 containerd[1524]: time="2025-01-30T04:42:33.929576596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jr8d5,Uid:bf23d295-3e54-46cf-aa60-b809e7c1fd1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5375304f1e60a60fa08d03c0b75b9ab17e445adeea72672b9acd5a8930467489\"" Jan 30 04:42:33.934075 containerd[1524]: time="2025-01-30T04:42:33.933951740Z" level=info msg="CreateContainer within sandbox \"5375304f1e60a60fa08d03c0b75b9ab17e445adeea72672b9acd5a8930467489\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 04:42:33.959171 containerd[1524]: time="2025-01-30T04:42:33.958396505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rqs26,Uid:45f8009a-0c22-4d8c-9038-4e030021e956,Namespace:kube-system,Attempt:0,} returns sandbox id \"6af813433cad0a9244e87c11c25f3a269b4e0557047e73400514540f98e7cc88\"" Jan 30 04:42:33.963901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989854197.mount: Deactivated successfully. 
Jan 30 04:42:33.966392 containerd[1524]: time="2025-01-30T04:42:33.965726932Z" level=info msg="CreateContainer within sandbox \"6af813433cad0a9244e87c11c25f3a269b4e0557047e73400514540f98e7cc88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 04:42:33.970144 containerd[1524]: time="2025-01-30T04:42:33.970112864Z" level=info msg="CreateContainer within sandbox \"5375304f1e60a60fa08d03c0b75b9ab17e445adeea72672b9acd5a8930467489\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7de6cdfc96cc5d9349695f9312dccb0330585b629724cd8cc05599aee84a2fa5\"" Jan 30 04:42:33.971069 containerd[1524]: time="2025-01-30T04:42:33.971045782Z" level=info msg="StartContainer for \"7de6cdfc96cc5d9349695f9312dccb0330585b629724cd8cc05599aee84a2fa5\"" Jan 30 04:42:33.984612 containerd[1524]: time="2025-01-30T04:42:33.984542746Z" level=info msg="CreateContainer within sandbox \"6af813433cad0a9244e87c11c25f3a269b4e0557047e73400514540f98e7cc88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b235d60d4f8dc5e74e7c1aa00ef70526a47209d7eae11da39a25e6bca6ac41a\"" Jan 30 04:42:33.986322 containerd[1524]: time="2025-01-30T04:42:33.986298813Z" level=info msg="StartContainer for \"5b235d60d4f8dc5e74e7c1aa00ef70526a47209d7eae11da39a25e6bca6ac41a\"" Jan 30 04:42:34.004045 systemd[1]: Started cri-containerd-7de6cdfc96cc5d9349695f9312dccb0330585b629724cd8cc05599aee84a2fa5.scope - libcontainer container 7de6cdfc96cc5d9349695f9312dccb0330585b629724cd8cc05599aee84a2fa5. Jan 30 04:42:34.031426 systemd[1]: Started cri-containerd-5b235d60d4f8dc5e74e7c1aa00ef70526a47209d7eae11da39a25e6bca6ac41a.scope - libcontainer container 5b235d60d4f8dc5e74e7c1aa00ef70526a47209d7eae11da39a25e6bca6ac41a. Jan 30 04:42:34.059024 containerd[1524]: time="2025-01-30T04:42:34.057410009Z" level=info msg="StartContainer for \"7de6cdfc96cc5d9349695f9312dccb0330585b629724cd8cc05599aee84a2fa5\" returns successfully" Jan 30 04:42:34.067640 containerd[1524]: time="2025-01-30T04:42:34.067401690Z" level=info msg="StartContainer for \"5b235d60d4f8dc5e74e7c1aa00ef70526a47209d7eae11da39a25e6bca6ac41a\" returns successfully" Jan 30 04:42:34.864157 kubelet[2892]: I0130 04:42:34.864050 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jr8d5" podStartSLOduration=20.864029148 podStartE2EDuration="20.864029148s" podCreationTimestamp="2025-01-30 04:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:42:34.862687205 +0000 UTC m=+36.248360668" watchObservedRunningTime="2025-01-30 04:42:34.864029148 +0000 UTC m=+36.249702621" Jan 30 04:42:34.876330 kubelet[2892]: I0130 04:42:34.875876 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rqs26" podStartSLOduration=20.875858971 podStartE2EDuration="20.875858971s" podCreationTimestamp="2025-01-30 04:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:42:34.875839405 +0000 UTC m=+36.261512868" watchObservedRunningTime="2025-01-30 04:42:34.875858971 +0000 UTC m=+36.261532464" Jan 30 04:42:36.202640 kubelet[2892]: I0130 04:42:36.202400 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 04:42:40.031824 update_engine[1514]: I20250130 04:42:40.031729 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 
04:42:40.032395 update_engine[1514]: I20250130 04:42:40.032090 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 04:42:40.032395 update_engine[1514]: I20250130 04:42:40.032330 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 04:42:40.032996 update_engine[1514]: E20250130 04:42:40.032684 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 04:42:40.032996 update_engine[1514]: I20250130 04:42:40.032722 1514 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 04:42:40.032996 update_engine[1514]: I20250130 04:42:40.032732 1514 omaha_request_action.cc:617] Omaha request response: Jan 30 04:42:40.033158 update_engine[1514]: E20250130 04:42:40.033123 1514 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 04:42:40.054967 update_engine[1514]: I20250130 04:42:40.054874 1514 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 04:42:40.054967 update_engine[1514]: I20250130 04:42:40.054939 1514 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 04:42:40.054967 update_engine[1514]: I20250130 04:42:40.054948 1514 update_attempter.cc:306] Processing Done. Jan 30 04:42:40.054967 update_engine[1514]: E20250130 04:42:40.054972 1514 update_attempter.cc:619] Update failed. Jan 30 04:42:40.055238 update_engine[1514]: I20250130 04:42:40.054996 1514 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 04:42:40.055238 update_engine[1514]: I20250130 04:42:40.055006 1514 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 04:42:40.055238 update_engine[1514]: I20250130 04:42:40.055013 1514 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 04:42:40.055238 update_engine[1514]: I20250130 04:42:40.055114 1514 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 04:42:40.058955 update_engine[1514]: I20250130 04:42:40.056970 1514 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 04:42:40.058955 update_engine[1514]: I20250130 04:42:40.058940 1514 omaha_request_action.cc:272] Request: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: Jan 30 04:42:40.058955 update_engine[1514]: I20250130 04:42:40.058954 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 04:42:40.059520 update_engine[1514]: I20250130 04:42:40.059252 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 04:42:40.059520 update_engine[1514]: I20250130 04:42:40.059485 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 04:42:40.059748 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 04:42:40.060217 update_engine[1514]: E20250130 04:42:40.059827 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059875 1514 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059908 1514 omaha_request_action.cc:617] Omaha request response: Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059917 1514 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059924 1514 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059931 1514 update_attempter.cc:306] Processing Done. Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059938 1514 update_attempter.cc:310] Error event sent. Jan 30 04:42:40.060217 update_engine[1514]: I20250130 04:42:40.059948 1514 update_check_scheduler.cc:74] Next update check in 44m41s Jan 30 04:42:40.060456 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 04:45:35.476977 systemd[1]: Started sshd@9-116.202.14.223:22-114.242.9.121:35092.service - OpenSSH per-connection server daemon (114.242.9.121:35092). Jan 30 04:46:47.929199 systemd[1]: Started sshd@10-116.202.14.223:22-139.178.89.65:48896.service - OpenSSH per-connection server daemon (139.178.89.65:48896). Jan 30 04:46:48.920764 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 48896 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:46:48.923212 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:46:48.927933 systemd-logind[1509]: New session 8 of user core. Jan 30 04:46:48.933008 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 04:46:50.044017 sshd[4291]: Connection closed by 139.178.89.65 port 48896 Jan 30 04:46:50.044902 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jan 30 04:46:50.048188 systemd[1]: sshd@10-116.202.14.223:22-139.178.89.65:48896.service: Deactivated successfully. Jan 30 04:46:50.050461 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 04:46:50.052131 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Jan 30 04:46:50.053597 systemd-logind[1509]: Removed session 8. Jan 30 04:46:55.223610 systemd[1]: Started sshd@11-116.202.14.223:22-139.178.89.65:39156.service - OpenSSH per-connection server daemon (139.178.89.65:39156). Jan 30 04:46:56.199131 sshd[4303]: Accepted publickey for core from 139.178.89.65 port 39156 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:46:56.202454 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:46:56.211675 systemd-logind[1509]: New session 9 of user core. Jan 30 04:46:56.215030 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 04:46:56.956395 sshd[4305]: Connection closed by 139.178.89.65 port 39156 Jan 30 04:46:56.957172 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Jan 30 04:46:56.961664 systemd[1]: sshd@11-116.202.14.223:22-139.178.89.65:39156.service: Deactivated successfully. Jan 30 04:46:56.962035 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Jan 30 04:46:56.964384 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 04:46:56.965402 systemd-logind[1509]: Removed session 9. Jan 30 04:47:02.127160 systemd[1]: Started sshd@12-116.202.14.223:22-139.178.89.65:58778.service - OpenSSH per-connection server daemon (139.178.89.65:58778). Jan 30 04:47:03.111533 sshd[4319]: Accepted publickey for core from 139.178.89.65 port 58778 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:03.113681 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:03.118022 systemd-logind[1509]: New session 10 of user core. Jan 30 04:47:03.120087 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 04:47:03.878666 sshd[4321]: Connection closed by 139.178.89.65 port 58778 Jan 30 04:47:03.879565 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:03.882759 systemd[1]: sshd@12-116.202.14.223:22-139.178.89.65:58778.service: Deactivated successfully. Jan 30 04:47:03.884948 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 04:47:03.887345 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Jan 30 04:47:03.888838 systemd-logind[1509]: Removed session 10. Jan 30 04:47:09.051051 systemd[1]: Started sshd@13-116.202.14.223:22-139.178.89.65:58794.service - OpenSSH per-connection server daemon (139.178.89.65:58794). Jan 30 04:47:10.041023 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 58794 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:10.042917 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:10.049025 systemd-logind[1509]: New session 11 of user core. Jan 30 04:47:10.054085 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 04:47:10.798959 sshd[4334]: Connection closed by 139.178.89.65 port 58794 Jan 30 04:47:10.799545 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:10.804380 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Jan 30 04:47:10.805270 systemd[1]: sshd@13-116.202.14.223:22-139.178.89.65:58794.service: Deactivated successfully. Jan 30 04:47:10.807699 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 04:47:10.808988 systemd-logind[1509]: Removed session 11. Jan 30 04:47:15.973178 systemd[1]: Started sshd@14-116.202.14.223:22-139.178.89.65:55666.service - OpenSSH per-connection server daemon (139.178.89.65:55666). Jan 30 04:47:16.959399 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 55666 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:16.960963 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:16.965483 systemd-logind[1509]: New session 12 of user core. Jan 30 04:47:16.970052 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 04:47:17.700288 sshd[4350]: Connection closed by 139.178.89.65 port 55666 Jan 30 04:47:17.701049 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:17.705082 systemd[1]: sshd@14-116.202.14.223:22-139.178.89.65:55666.service: Deactivated successfully. Jan 30 04:47:17.707871 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 04:47:17.708729 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Jan 30 04:47:17.710340 systemd-logind[1509]: Removed session 12. Jan 30 04:47:22.879254 systemd[1]: Started sshd@15-116.202.14.223:22-139.178.89.65:41096.service - OpenSSH per-connection server daemon (139.178.89.65:41096). Jan 30 04:47:23.857597 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 41096 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:23.859454 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:23.864052 systemd-logind[1509]: New session 13 of user core. Jan 30 04:47:23.871027 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 04:47:24.610823 sshd[4364]: Connection closed by 139.178.89.65 port 41096 Jan 30 04:47:24.611659 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:24.615167 systemd[1]: sshd@15-116.202.14.223:22-139.178.89.65:41096.service: Deactivated successfully. Jan 30 04:47:24.617543 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 04:47:24.619257 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Jan 30 04:47:24.620532 systemd-logind[1509]: Removed session 13. Jan 30 04:47:29.781557 systemd[1]: Started sshd@16-116.202.14.223:22-139.178.89.65:41112.service - OpenSSH per-connection server daemon (139.178.89.65:41112). Jan 30 04:47:30.763784 sshd[4376]: Accepted publickey for core from 139.178.89.65 port 41112 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:30.767045 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:30.771944 systemd-logind[1509]: New session 14 of user core. Jan 30 04:47:30.777062 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 04:47:31.563634 sshd[4378]: Connection closed by 139.178.89.65 port 41112 Jan 30 04:47:31.564148 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:31.568653 systemd[1]: sshd@16-116.202.14.223:22-139.178.89.65:41112.service: Deactivated successfully. Jan 30 04:47:31.571273 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 04:47:31.572493 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Jan 30 04:47:31.573767 systemd-logind[1509]: Removed session 14. Jan 30 04:47:35.522412 systemd[1]: sshd@9-116.202.14.223:22-114.242.9.121:35092.service: Deactivated successfully. Jan 30 04:47:36.734292 systemd[1]: Started sshd@17-116.202.14.223:22-139.178.89.65:49232.service - OpenSSH per-connection server daemon (139.178.89.65:49232). Jan 30 04:47:37.728074 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 49232 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:37.729623 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:37.734305 systemd-logind[1509]: New session 15 of user core. Jan 30 04:47:37.742025 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 30 04:47:38.474249 sshd[4395]: Connection closed by 139.178.89.65 port 49232 Jan 30 04:47:38.474828 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:38.478347 systemd[1]: sshd@17-116.202.14.223:22-139.178.89.65:49232.service: Deactivated successfully. Jan 30 04:47:38.480449 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 04:47:38.481087 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Jan 30 04:47:38.482570 systemd-logind[1509]: Removed session 15. Jan 30 04:47:43.646210 systemd[1]: Started sshd@18-116.202.14.223:22-139.178.89.65:33644.service - OpenSSH per-connection server daemon (139.178.89.65:33644). Jan 30 04:47:44.624133 sshd[4409]: Accepted publickey for core from 139.178.89.65 port 33644 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:44.625843 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:44.630262 systemd-logind[1509]: New session 16 of user core. Jan 30 04:47:44.637017 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 04:47:45.378816 sshd[4411]: Connection closed by 139.178.89.65 port 33644 Jan 30 04:47:45.379428 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:45.383506 systemd[1]: sshd@18-116.202.14.223:22-139.178.89.65:33644.service: Deactivated successfully. Jan 30 04:47:45.387502 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 04:47:45.389273 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Jan 30 04:47:45.390717 systemd-logind[1509]: Removed session 16. Jan 30 04:47:50.555155 systemd[1]: Started sshd@19-116.202.14.223:22-139.178.89.65:33660.service - OpenSSH per-connection server daemon (139.178.89.65:33660). Jan 30 04:47:51.542296 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 33660 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:51.543880 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:51.548677 systemd-logind[1509]: New session 17 of user core. Jan 30 04:47:51.555020 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 04:47:52.285476 sshd[4428]: Connection closed by 139.178.89.65 port 33660 Jan 30 04:47:52.286130 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:52.289244 systemd[1]: sshd@19-116.202.14.223:22-139.178.89.65:33660.service: Deactivated successfully. Jan 30 04:47:52.291364 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 04:47:52.293355 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Jan 30 04:47:52.294435 systemd-logind[1509]: Removed session 17. Jan 30 04:47:57.452776 systemd[1]: Started sshd@20-116.202.14.223:22-139.178.89.65:52532.service - OpenSSH per-connection server daemon (139.178.89.65:52532). Jan 30 04:47:58.432010 sshd[4441]: Accepted publickey for core from 139.178.89.65 port 52532 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:47:58.433848 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:47:58.439336 systemd-logind[1509]: New session 18 of user core. Jan 30 04:47:58.446053 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 04:47:59.179710 sshd[4443]: Connection closed by 139.178.89.65 port 52532 Jan 30 04:47:59.180669 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Jan 30 04:47:59.184278 systemd[1]: sshd@20-116.202.14.223:22-139.178.89.65:52532.service: Deactivated successfully. Jan 30 04:47:59.186299 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 04:47:59.187279 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Jan 30 04:47:59.188818 systemd-logind[1509]: Removed session 18. Jan 30 04:48:04.353213 systemd[1]: Started sshd@21-116.202.14.223:22-139.178.89.65:51896.service - OpenSSH per-connection server daemon (139.178.89.65:51896). Jan 30 04:48:05.323108 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 51896 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:05.325196 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:05.330059 systemd-logind[1509]: New session 19 of user core. Jan 30 04:48:05.333079 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 04:48:06.063255 sshd[4459]: Connection closed by 139.178.89.65 port 51896 Jan 30 04:48:06.064212 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:06.068121 systemd[1]: sshd@21-116.202.14.223:22-139.178.89.65:51896.service: Deactivated successfully. Jan 30 04:48:06.070783 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 04:48:06.072210 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Jan 30 04:48:06.073445 systemd-logind[1509]: Removed session 19. Jan 30 04:48:11.236030 systemd[1]: Started sshd@22-116.202.14.223:22-139.178.89.65:34896.service - OpenSSH per-connection server daemon (139.178.89.65:34896). Jan 30 04:48:12.215098 sshd[4471]: Accepted publickey for core from 139.178.89.65 port 34896 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:12.216806 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:12.222121 systemd-logind[1509]: New session 20 of user core. Jan 30 04:48:12.225014 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 04:48:12.951777 sshd[4473]: Connection closed by 139.178.89.65 port 34896 Jan 30 04:48:12.952563 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:12.957312 systemd[1]: sshd@22-116.202.14.223:22-139.178.89.65:34896.service: Deactivated successfully. Jan 30 04:48:12.960753 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 04:48:12.962028 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. Jan 30 04:48:12.963355 systemd-logind[1509]: Removed session 20. Jan 30 04:48:18.128155 systemd[1]: Started sshd@23-116.202.14.223:22-139.178.89.65:34908.service - OpenSSH per-connection server daemon (139.178.89.65:34908). Jan 30 04:48:19.111357 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 34908 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:19.114308 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:19.123021 systemd-logind[1509]: New session 21 of user core. Jan 30 04:48:19.128151 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 30 04:48:19.866070 sshd[4489]: Connection closed by 139.178.89.65 port 34908 Jan 30 04:48:19.866742 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:19.871879 systemd[1]: sshd@23-116.202.14.223:22-139.178.89.65:34908.service: Deactivated successfully. Jan 30 04:48:19.875847 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 04:48:19.878982 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Jan 30 04:48:19.881858 systemd-logind[1509]: Removed session 21. Jan 30 04:48:25.036163 systemd[1]: Started sshd@24-116.202.14.223:22-139.178.89.65:45370.service - OpenSSH per-connection server daemon (139.178.89.65:45370). Jan 30 04:48:26.025746 sshd[4502]: Accepted publickey for core from 139.178.89.65 port 45370 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:26.027456 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:26.032550 systemd-logind[1509]: New session 22 of user core. Jan 30 04:48:26.037010 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 04:48:26.765622 sshd[4504]: Connection closed by 139.178.89.65 port 45370 Jan 30 04:48:26.766330 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:26.771104 systemd[1]: sshd@24-116.202.14.223:22-139.178.89.65:45370.service: Deactivated successfully. Jan 30 04:48:26.771304 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Jan 30 04:48:26.774792 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 04:48:26.775835 systemd-logind[1509]: Removed session 22. Jan 30 04:48:31.939190 systemd[1]: Started sshd@25-116.202.14.223:22-139.178.89.65:56718.service - OpenSSH per-connection server daemon (139.178.89.65:56718). Jan 30 04:48:32.915817 sshd[4517]: Accepted publickey for core from 139.178.89.65 port 56718 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:32.917610 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:32.922733 systemd-logind[1509]: New session 23 of user core. Jan 30 04:48:32.930072 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 04:48:33.655401 sshd[4519]: Connection closed by 139.178.89.65 port 56718 Jan 30 04:48:33.656271 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:33.661244 systemd[1]: sshd@25-116.202.14.223:22-139.178.89.65:56718.service: Deactivated successfully. Jan 30 04:48:33.664931 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 04:48:33.665956 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Jan 30 04:48:33.667390 systemd-logind[1509]: Removed session 23. Jan 30 04:48:38.828126 systemd[1]: Started sshd@26-116.202.14.223:22-139.178.89.65:56724.service - OpenSSH per-connection server daemon (139.178.89.65:56724). Jan 30 04:48:39.807367 sshd[4531]: Accepted publickey for core from 139.178.89.65 port 56724 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:39.809603 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:39.816732 systemd-logind[1509]: New session 24 of user core. Jan 30 04:48:39.822151 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 30 04:48:40.544956 sshd[4533]: Connection closed by 139.178.89.65 port 56724 Jan 30 04:48:40.545852 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:40.550072 systemd[1]: sshd@26-116.202.14.223:22-139.178.89.65:56724.service: Deactivated successfully. Jan 30 04:48:40.551984 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 04:48:40.552660 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit. Jan 30 04:48:40.554063 systemd-logind[1509]: Removed session 24. Jan 30 04:48:45.720492 systemd[1]: Started sshd@27-116.202.14.223:22-139.178.89.65:52654.service - OpenSSH per-connection server daemon (139.178.89.65:52654). Jan 30 04:48:46.707172 sshd[4547]: Accepted publickey for core from 139.178.89.65 port 52654 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:46.708730 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:46.713123 systemd-logind[1509]: New session 25 of user core. Jan 30 04:48:46.717995 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 04:48:47.459143 sshd[4549]: Connection closed by 139.178.89.65 port 52654 Jan 30 04:48:47.459820 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:47.463955 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit. Jan 30 04:48:47.464725 systemd[1]: sshd@27-116.202.14.223:22-139.178.89.65:52654.service: Deactivated successfully. Jan 30 04:48:47.467428 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 04:48:47.468621 systemd-logind[1509]: Removed session 25. Jan 30 04:48:52.630208 systemd[1]: Started sshd@28-116.202.14.223:22-139.178.89.65:44178.service - OpenSSH per-connection server daemon (139.178.89.65:44178). Jan 30 04:48:53.601079 sshd[4560]: Accepted publickey for core from 139.178.89.65 port 44178 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:48:53.602917 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:48:53.608096 systemd-logind[1509]: New session 26 of user core. Jan 30 04:48:53.616075 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 04:48:54.325285 sshd[4562]: Connection closed by 139.178.89.65 port 44178 Jan 30 04:48:54.326096 sshd-session[4560]: pam_unix(sshd:session): session closed for user core Jan 30 04:48:54.329194 systemd[1]: sshd@28-116.202.14.223:22-139.178.89.65:44178.service: Deactivated successfully. Jan 30 04:48:54.331473 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 04:48:54.332975 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit. Jan 30 04:48:54.334513 systemd-logind[1509]: Removed session 26. Jan 30 04:48:59.502154 systemd[1]: Started sshd@29-116.202.14.223:22-139.178.89.65:44190.service - OpenSSH per-connection server daemon (139.178.89.65:44190). Jan 30 04:49:00.494468 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 44190 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:00.496150 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:00.501245 systemd-logind[1509]: New session 27 of user core. Jan 30 04:49:00.506024 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 30 04:49:01.243623 sshd[4578]: Connection closed by 139.178.89.65 port 44190 Jan 30 04:49:01.244316 sshd-session[4576]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:01.247277 systemd[1]: sshd@29-116.202.14.223:22-139.178.89.65:44190.service: Deactivated successfully. Jan 30 04:49:01.249088 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 04:49:01.250425 systemd-logind[1509]: Session 27 logged out. Waiting for processes to exit. Jan 30 04:49:01.251767 systemd-logind[1509]: Removed session 27. Jan 30 04:49:06.421227 systemd[1]: Started sshd@30-116.202.14.223:22-139.178.89.65:39926.service - OpenSSH per-connection server daemon (139.178.89.65:39926). Jan 30 04:49:07.411950 sshd[4590]: Accepted publickey for core from 139.178.89.65 port 39926 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:07.413579 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:07.418640 systemd-logind[1509]: New session 28 of user core. Jan 30 04:49:07.423068 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 04:49:08.153434 sshd[4592]: Connection closed by 139.178.89.65 port 39926 Jan 30 04:49:08.154134 sshd-session[4590]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:08.158445 systemd-logind[1509]: Session 28 logged out. Waiting for processes to exit. Jan 30 04:49:08.159051 systemd[1]: sshd@30-116.202.14.223:22-139.178.89.65:39926.service: Deactivated successfully. Jan 30 04:49:08.161705 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 04:49:08.162951 systemd-logind[1509]: Removed session 28. Jan 30 04:49:13.326244 systemd[1]: Started sshd@31-116.202.14.223:22-139.178.89.65:41580.service - OpenSSH per-connection server daemon (139.178.89.65:41580). Jan 30 04:49:14.299683 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 41580 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:14.301442 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:14.306434 systemd-logind[1509]: New session 29 of user core. Jan 30 04:49:14.311010 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 04:49:15.045476 sshd[4606]: Connection closed by 139.178.89.65 port 41580 Jan 30 04:49:15.046131 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:15.050041 systemd-logind[1509]: Session 29 logged out. Waiting for processes to exit. Jan 30 04:49:15.050378 systemd[1]: sshd@31-116.202.14.223:22-139.178.89.65:41580.service: Deactivated successfully. Jan 30 04:49:15.052631 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 04:49:15.053794 systemd-logind[1509]: Removed session 29. Jan 30 04:49:20.210953 systemd[1]: Started sshd@32-116.202.14.223:22-139.178.89.65:41588.service - OpenSSH per-connection server daemon (139.178.89.65:41588). Jan 30 04:49:21.179036 sshd[4620]: Accepted publickey for core from 139.178.89.65 port 41588 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:21.180833 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:21.186156 systemd-logind[1509]: New session 30 of user core. Jan 30 04:49:21.192016 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 30 04:49:21.917257 sshd[4622]: Connection closed by 139.178.89.65 port 41588 Jan 30 04:49:21.918023 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:21.922431 systemd[1]: sshd@32-116.202.14.223:22-139.178.89.65:41588.service: Deactivated successfully. Jan 30 04:49:21.925380 systemd[1]: session-30.scope: Deactivated successfully. Jan 30 04:49:21.926268 systemd-logind[1509]: Session 30 logged out. Waiting for processes to exit. Jan 30 04:49:21.928165 systemd-logind[1509]: Removed session 30. Jan 30 04:49:27.092223 systemd[1]: Started sshd@33-116.202.14.223:22-139.178.89.65:53880.service - OpenSSH per-connection server daemon (139.178.89.65:53880). Jan 30 04:49:28.076065 sshd[4634]: Accepted publickey for core from 139.178.89.65 port 53880 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:28.077815 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:28.083460 systemd-logind[1509]: New session 31 of user core. Jan 30 04:49:28.090166 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 30 04:49:28.802189 sshd[4636]: Connection closed by 139.178.89.65 port 53880 Jan 30 04:49:28.802947 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:28.806067 systemd[1]: sshd@33-116.202.14.223:22-139.178.89.65:53880.service: Deactivated successfully. Jan 30 04:49:28.808627 systemd[1]: session-31.scope: Deactivated successfully. Jan 30 04:49:28.810954 systemd-logind[1509]: Session 31 logged out. Waiting for processes to exit. Jan 30 04:49:28.812412 systemd-logind[1509]: Removed session 31. Jan 30 04:49:33.978253 systemd[1]: Started sshd@34-116.202.14.223:22-139.178.89.65:33116.service - OpenSSH per-connection server daemon (139.178.89.65:33116). Jan 30 04:49:34.954454 sshd[4648]: Accepted publickey for core from 139.178.89.65 port 33116 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:34.956016 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:34.960765 systemd-logind[1509]: New session 32 of user core. Jan 30 04:49:34.968051 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 30 04:49:35.684436 sshd[4652]: Connection closed by 139.178.89.65 port 33116 Jan 30 04:49:35.685301 sshd-session[4648]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:35.689164 systemd-logind[1509]: Session 32 logged out. Waiting for processes to exit. Jan 30 04:49:35.689787 systemd[1]: sshd@34-116.202.14.223:22-139.178.89.65:33116.service: Deactivated successfully. Jan 30 04:49:35.692198 systemd[1]: session-32.scope: Deactivated successfully. Jan 30 04:49:35.693226 systemd-logind[1509]: Removed session 32. Jan 30 04:49:40.855220 systemd[1]: Started sshd@35-116.202.14.223:22-139.178.89.65:33128.service - OpenSSH per-connection server daemon (139.178.89.65:33128). Jan 30 04:49:41.839587 sshd[4664]: Accepted publickey for core from 139.178.89.65 port 33128 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:41.841223 sshd-session[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:41.845529 systemd-logind[1509]: New session 33 of user core. Jan 30 04:49:41.850047 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 30 04:49:42.602248 sshd[4667]: Connection closed by 139.178.89.65 port 33128 Jan 30 04:49:42.603673 sshd-session[4664]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:42.606645 systemd[1]: sshd@35-116.202.14.223:22-139.178.89.65:33128.service: Deactivated successfully. Jan 30 04:49:42.608802 systemd[1]: session-33.scope: Deactivated successfully. Jan 30 04:49:42.610725 systemd-logind[1509]: Session 33 logged out. Waiting for processes to exit. Jan 30 04:49:42.612213 systemd-logind[1509]: Removed session 33. Jan 30 04:49:47.778240 systemd[1]: Started sshd@36-116.202.14.223:22-139.178.89.65:35846.service - OpenSSH per-connection server daemon (139.178.89.65:35846). Jan 30 04:49:48.775153 sshd[4682]: Accepted publickey for core from 139.178.89.65 port 35846 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:48.777002 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:48.782354 systemd-logind[1509]: New session 34 of user core. Jan 30 04:49:48.786026 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 30 04:49:49.536594 sshd[4684]: Connection closed by 139.178.89.65 port 35846 Jan 30 04:49:49.537367 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:49.541924 systemd-logind[1509]: Session 34 logged out. Waiting for processes to exit. Jan 30 04:49:49.542817 systemd[1]: sshd@36-116.202.14.223:22-139.178.89.65:35846.service: Deactivated successfully. Jan 30 04:49:49.545492 systemd[1]: session-34.scope: Deactivated successfully. Jan 30 04:49:49.546991 systemd-logind[1509]: Removed session 34. Jan 30 04:49:54.704456 systemd[1]: Started sshd@37-116.202.14.223:22-139.178.89.65:38478.service - OpenSSH per-connection server daemon (139.178.89.65:38478). Jan 30 04:49:55.696080 sshd[4696]: Accepted publickey for core from 139.178.89.65 port 38478 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:49:55.697604 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:49:55.701944 systemd-logind[1509]: New session 35 of user core. Jan 30 04:49:55.706040 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 30 04:49:56.458797 sshd[4698]: Connection closed by 139.178.89.65 port 38478 Jan 30 04:49:56.460939 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Jan 30 04:49:56.466389 systemd[1]: sshd@37-116.202.14.223:22-139.178.89.65:38478.service: Deactivated successfully. Jan 30 04:49:56.468569 systemd[1]: session-35.scope: Deactivated successfully. Jan 30 04:49:56.469550 systemd-logind[1509]: Session 35 logged out. Waiting for processes to exit. Jan 30 04:49:56.470611 systemd-logind[1509]: Removed session 35. Jan 30 04:50:01.634179 systemd[1]: Started sshd@38-116.202.14.223:22-139.178.89.65:49386.service - OpenSSH per-connection server daemon (139.178.89.65:49386). Jan 30 04:50:02.629166 sshd[4712]: Accepted publickey for core from 139.178.89.65 port 49386 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:02.631345 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:02.639338 systemd-logind[1509]: New session 36 of user core. Jan 30 04:50:02.647309 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 30 04:50:03.390331 sshd[4714]: Connection closed by 139.178.89.65 port 49386 Jan 30 04:50:03.391380 sshd-session[4712]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:03.395524 systemd[1]: sshd@38-116.202.14.223:22-139.178.89.65:49386.service: Deactivated successfully. Jan 30 04:50:03.398866 systemd[1]: session-36.scope: Deactivated successfully. Jan 30 04:50:03.401303 systemd-logind[1509]: Session 36 logged out. Waiting for processes to exit. Jan 30 04:50:03.403081 systemd-logind[1509]: Removed session 36. Jan 30 04:50:08.565183 systemd[1]: Started sshd@39-116.202.14.223:22-139.178.89.65:49402.service - OpenSSH per-connection server daemon (139.178.89.65:49402). Jan 30 04:50:09.548693 sshd[4726]: Accepted publickey for core from 139.178.89.65 port 49402 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:09.551048 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:09.557123 systemd-logind[1509]: New session 37 of user core. Jan 30 04:50:09.564098 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 30 04:50:10.374592 sshd[4728]: Connection closed by 139.178.89.65 port 49402 Jan 30 04:50:10.375972 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:10.381831 systemd[1]: sshd@39-116.202.14.223:22-139.178.89.65:49402.service: Deactivated successfully. Jan 30 04:50:10.384750 systemd[1]: session-37.scope: Deactivated successfully. Jan 30 04:50:10.385937 systemd-logind[1509]: Session 37 logged out. Waiting for processes to exit. Jan 30 04:50:10.387354 systemd-logind[1509]: Removed session 37. Jan 30 04:50:15.547283 systemd[1]: Started sshd@40-116.202.14.223:22-139.178.89.65:54540.service - OpenSSH per-connection server daemon (139.178.89.65:54540). Jan 30 04:50:16.518568 sshd[4742]: Accepted publickey for core from 139.178.89.65 port 54540 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:16.520407 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:16.525206 systemd-logind[1509]: New session 38 of user core. Jan 30 04:50:16.529017 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 30 04:50:17.249760 sshd[4744]: Connection closed by 139.178.89.65 port 54540 Jan 30 04:50:17.250549 sshd-session[4742]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:17.254707 systemd-logind[1509]: Session 38 logged out. Waiting for processes to exit. Jan 30 04:50:17.255571 systemd[1]: sshd@40-116.202.14.223:22-139.178.89.65:54540.service: Deactivated successfully. Jan 30 04:50:17.258434 systemd[1]: session-38.scope: Deactivated successfully. Jan 30 04:50:17.260118 systemd-logind[1509]: Removed session 38. Jan 30 04:50:22.431117 systemd[1]: Started sshd@41-116.202.14.223:22-139.178.89.65:35440.service - OpenSSH per-connection server daemon (139.178.89.65:35440). Jan 30 04:50:23.419337 sshd[4756]: Accepted publickey for core from 139.178.89.65 port 35440 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:23.420986 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:23.426076 systemd-logind[1509]: New session 39 of user core. Jan 30 04:50:23.432066 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 30 04:50:24.170493 sshd[4758]: Connection closed by 139.178.89.65 port 35440 Jan 30 04:50:24.171142 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:24.175077 systemd[1]: sshd@41-116.202.14.223:22-139.178.89.65:35440.service: Deactivated successfully. Jan 30 04:50:24.177842 systemd[1]: session-39.scope: Deactivated successfully. Jan 30 04:50:24.180022 systemd-logind[1509]: Session 39 logged out. Waiting for processes to exit. Jan 30 04:50:24.181720 systemd-logind[1509]: Removed session 39. Jan 30 04:50:29.342151 systemd[1]: Started sshd@42-116.202.14.223:22-139.178.89.65:35452.service - OpenSSH per-connection server daemon (139.178.89.65:35452). Jan 30 04:50:30.313516 sshd[4770]: Accepted publickey for core from 139.178.89.65 port 35452 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:30.315156 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:30.320418 systemd-logind[1509]: New session 40 of user core. Jan 30 04:50:30.330031 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 30 04:50:31.048675 sshd[4772]: Connection closed by 139.178.89.65 port 35452 Jan 30 04:50:31.049482 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:31.052337 systemd[1]: sshd@42-116.202.14.223:22-139.178.89.65:35452.service: Deactivated successfully. Jan 30 04:50:31.054580 systemd[1]: session-40.scope: Deactivated successfully. Jan 30 04:50:31.056223 systemd-logind[1509]: Session 40 logged out. Waiting for processes to exit. Jan 30 04:50:31.057668 systemd-logind[1509]: Removed session 40. Jan 30 04:50:36.224411 systemd[1]: Started sshd@43-116.202.14.223:22-139.178.89.65:49000.service - OpenSSH per-connection server daemon (139.178.89.65:49000). Jan 30 04:50:37.205828 sshd[4786]: Accepted publickey for core from 139.178.89.65 port 49000 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:37.207333 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:37.212072 systemd-logind[1509]: New session 41 of user core. Jan 30 04:50:37.217052 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 30 04:50:37.954145 sshd[4788]: Connection closed by 139.178.89.65 port 49000 Jan 30 04:50:37.955127 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:37.960036 systemd-logind[1509]: Session 41 logged out. Waiting for processes to exit. Jan 30 04:50:37.960847 systemd[1]: sshd@43-116.202.14.223:22-139.178.89.65:49000.service: Deactivated successfully. Jan 30 04:50:37.963957 systemd[1]: session-41.scope: Deactivated successfully. Jan 30 04:50:37.965547 systemd-logind[1509]: Removed session 41. Jan 30 04:50:43.130155 systemd[1]: Started sshd@44-116.202.14.223:22-139.178.89.65:33202.service - OpenSSH per-connection server daemon (139.178.89.65:33202). Jan 30 04:50:44.118875 sshd[4800]: Accepted publickey for core from 139.178.89.65 port 33202 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:44.120776 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:44.126133 systemd-logind[1509]: New session 42 of user core. Jan 30 04:50:44.130069 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 30 04:50:44.868439 sshd[4802]: Connection closed by 139.178.89.65 port 33202 Jan 30 04:50:44.869221 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:44.872325 systemd[1]: sshd@44-116.202.14.223:22-139.178.89.65:33202.service: Deactivated successfully. Jan 30 04:50:44.874846 systemd[1]: session-42.scope: Deactivated successfully. Jan 30 04:50:44.876660 systemd-logind[1509]: Session 42 logged out. Waiting for processes to exit. Jan 30 04:50:44.878352 systemd-logind[1509]: Removed session 42. Jan 30 04:50:50.040144 systemd[1]: Started sshd@45-116.202.14.223:22-139.178.89.65:33216.service - OpenSSH per-connection server daemon (139.178.89.65:33216). Jan 30 04:50:51.037003 sshd[4816]: Accepted publickey for core from 139.178.89.65 port 33216 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:51.038692 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:51.042980 systemd-logind[1509]: New session 43 of user core. Jan 30 04:50:51.048032 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 30 04:50:51.809829 sshd[4818]: Connection closed by 139.178.89.65 port 33216 Jan 30 04:50:51.810711 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:51.814606 systemd[1]: sshd@45-116.202.14.223:22-139.178.89.65:33216.service: Deactivated successfully. Jan 30 04:50:51.816605 systemd[1]: session-43.scope: Deactivated successfully. Jan 30 04:50:51.817297 systemd-logind[1509]: Session 43 logged out. Waiting for processes to exit. Jan 30 04:50:51.818366 systemd-logind[1509]: Removed session 43. Jan 30 04:50:51.981132 systemd[1]: Started sshd@46-116.202.14.223:22-139.178.89.65:48020.service - OpenSSH per-connection server daemon (139.178.89.65:48020). Jan 30 04:50:52.977053 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 48020 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:52.978657 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:52.982941 systemd-logind[1509]: New session 44 of user core. Jan 30 04:50:52.988006 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 30 04:50:53.778577 sshd[4832]: Connection closed by 139.178.89.65 port 48020 Jan 30 04:50:53.779113 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:53.781862 systemd[1]: sshd@46-116.202.14.223:22-139.178.89.65:48020.service: Deactivated successfully. Jan 30 04:50:53.784389 systemd[1]: session-44.scope: Deactivated successfully. Jan 30 04:50:53.785773 systemd-logind[1509]: Session 44 logged out. Waiting for processes to exit. Jan 30 04:50:53.786974 systemd-logind[1509]: Removed session 44. Jan 30 04:50:53.948847 systemd[1]: Started sshd@47-116.202.14.223:22-139.178.89.65:48032.service - OpenSSH per-connection server daemon (139.178.89.65:48032). Jan 30 04:50:54.933159 sshd[4843]: Accepted publickey for core from 139.178.89.65 port 48032 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:50:54.934785 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:50:54.939654 systemd-logind[1509]: New session 45 of user core. Jan 30 04:50:54.944029 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 30 04:50:55.690240 sshd[4845]: Connection closed by 139.178.89.65 port 48032 Jan 30 04:50:55.691079 sshd-session[4843]: pam_unix(sshd:session): session closed for user core Jan 30 04:50:55.695615 systemd[1]: sshd@47-116.202.14.223:22-139.178.89.65:48032.service: Deactivated successfully. Jan 30 04:50:55.697984 systemd[1]: session-45.scope: Deactivated successfully. Jan 30 04:50:55.698760 systemd-logind[1509]: Session 45 logged out. Waiting for processes to exit. Jan 30 04:50:55.699878 systemd-logind[1509]: Removed session 45. Jan 30 04:51:00.861162 systemd[1]: Started sshd@48-116.202.14.223:22-139.178.89.65:48048.service - OpenSSH per-connection server daemon (139.178.89.65:48048). Jan 30 04:51:01.549183 systemd[1]: Started sshd@49-116.202.14.223:22-194.0.234.38:17944.service - OpenSSH per-connection server daemon (194.0.234.38:17944). Jan 30 04:51:01.837838 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 48048 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:01.839536 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:01.843940 systemd-logind[1509]: New session 46 of user core. Jan 30 04:51:01.850029 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 30 04:51:02.564395 sshd[4862]: Connection closed by 139.178.89.65 port 48048 Jan 30 04:51:02.565000 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:02.568543 systemd[1]: sshd@48-116.202.14.223:22-139.178.89.65:48048.service: Deactivated successfully. Jan 30 04:51:02.570566 systemd[1]: session-46.scope: Deactivated successfully. Jan 30 04:51:02.571241 systemd-logind[1509]: Session 46 logged out. Waiting for processes to exit. Jan 30 04:51:02.572540 systemd-logind[1509]: Removed session 46. Jan 30 04:51:02.575398 sshd[4861]: Invalid user squid from 194.0.234.38 port 17944 Jan 30 04:51:02.638514 sshd[4861]: Connection closed by invalid user squid 194.0.234.38 port 17944 [preauth] Jan 30 04:51:02.641443 systemd[1]: sshd@49-116.202.14.223:22-194.0.234.38:17944.service: Deactivated successfully. Jan 30 04:51:07.738361 systemd[1]: Started sshd@50-116.202.14.223:22-139.178.89.65:35666.service - OpenSSH per-connection server daemon (139.178.89.65:35666). Jan 30 04:51:08.705936 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 35666 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:08.706950 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:08.716987 systemd-logind[1509]: New session 47 of user core. Jan 30 04:51:08.721123 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 30 04:51:09.444926 sshd[4878]: Connection closed by 139.178.89.65 port 35666 Jan 30 04:51:09.446378 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:09.450657 systemd[1]: sshd@50-116.202.14.223:22-139.178.89.65:35666.service: Deactivated successfully. Jan 30 04:51:09.452955 systemd[1]: session-47.scope: Deactivated successfully. Jan 30 04:51:09.455078 systemd-logind[1509]: Session 47 logged out. Waiting for processes to exit. Jan 30 04:51:09.456778 systemd-logind[1509]: Removed session 47. Jan 30 04:51:14.616152 systemd[1]: Started sshd@51-116.202.14.223:22-139.178.89.65:52670.service - OpenSSH per-connection server daemon (139.178.89.65:52670). 
Jan 30 04:51:15.593536 sshd[4888]: Accepted publickey for core from 139.178.89.65 port 52670 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:15.595040 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:15.599221 systemd-logind[1509]: New session 48 of user core. Jan 30 04:51:15.605032 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 30 04:51:16.321071 sshd[4892]: Connection closed by 139.178.89.65 port 52670 Jan 30 04:51:16.321804 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:16.325204 systemd[1]: sshd@51-116.202.14.223:22-139.178.89.65:52670.service: Deactivated successfully. Jan 30 04:51:16.327588 systemd[1]: session-48.scope: Deactivated successfully. Jan 30 04:51:16.329665 systemd-logind[1509]: Session 48 logged out. Waiting for processes to exit. Jan 30 04:51:16.330802 systemd-logind[1509]: Removed session 48. Jan 30 04:51:21.498190 systemd[1]: Started sshd@52-116.202.14.223:22-139.178.89.65:60322.service - OpenSSH per-connection server daemon (139.178.89.65:60322). Jan 30 04:51:22.502648 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 60322 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:22.504385 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:22.509146 systemd-logind[1509]: New session 49 of user core. Jan 30 04:51:22.514087 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 30 04:51:23.252104 sshd[4904]: Connection closed by 139.178.89.65 port 60322 Jan 30 04:51:23.252841 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:23.257603 systemd[1]: sshd@52-116.202.14.223:22-139.178.89.65:60322.service: Deactivated successfully. Jan 30 04:51:23.260313 systemd[1]: session-49.scope: Deactivated successfully. Jan 30 04:51:23.261382 systemd-logind[1509]: Session 49 logged out. Waiting for processes to exit. Jan 30 04:51:23.262782 systemd-logind[1509]: Removed session 49. Jan 30 04:51:28.427146 systemd[1]: Started sshd@53-116.202.14.223:22-139.178.89.65:60330.service - OpenSSH per-connection server daemon (139.178.89.65:60330). Jan 30 04:51:29.417471 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 60330 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:29.419200 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:29.423682 systemd-logind[1509]: New session 50 of user core. Jan 30 04:51:29.432049 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 30 04:51:30.160776 sshd[4918]: Connection closed by 139.178.89.65 port 60330 Jan 30 04:51:30.162366 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:30.165519 systemd[1]: sshd@53-116.202.14.223:22-139.178.89.65:60330.service: Deactivated successfully. Jan 30 04:51:30.167844 systemd[1]: session-50.scope: Deactivated successfully. Jan 30 04:51:30.169522 systemd-logind[1509]: Session 50 logged out. Waiting for processes to exit. Jan 30 04:51:30.170667 systemd-logind[1509]: Removed session 50. Jan 30 04:51:35.327801 systemd[1]: Started sshd@54-116.202.14.223:22-139.178.89.65:60226.service - OpenSSH per-connection server daemon (139.178.89.65:60226). 
Jan 30 04:51:36.312158 sshd[4929]: Accepted publickey for core from 139.178.89.65 port 60226 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:36.313981 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:36.319380 systemd-logind[1509]: New session 51 of user core. Jan 30 04:51:36.325071 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 30 04:51:37.059083 sshd[4931]: Connection closed by 139.178.89.65 port 60226 Jan 30 04:51:37.060055 sshd-session[4929]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:37.063456 systemd[1]: sshd@54-116.202.14.223:22-139.178.89.65:60226.service: Deactivated successfully. Jan 30 04:51:37.065751 systemd[1]: session-51.scope: Deactivated successfully. Jan 30 04:51:37.067512 systemd-logind[1509]: Session 51 logged out. Waiting for processes to exit. Jan 30 04:51:37.068851 systemd-logind[1509]: Removed session 51. Jan 30 04:51:42.235292 systemd[1]: Started sshd@55-116.202.14.223:22-139.178.89.65:49284.service - OpenSSH per-connection server daemon (139.178.89.65:49284). Jan 30 04:51:43.218880 sshd[4942]: Accepted publickey for core from 139.178.89.65 port 49284 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:43.221581 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:43.227512 systemd-logind[1509]: New session 52 of user core. Jan 30 04:51:43.232113 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 30 04:51:43.970299 sshd[4944]: Connection closed by 139.178.89.65 port 49284 Jan 30 04:51:43.971195 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:43.975997 systemd[1]: sshd@55-116.202.14.223:22-139.178.89.65:49284.service: Deactivated successfully. Jan 30 04:51:43.978313 systemd[1]: session-52.scope: Deactivated successfully. Jan 30 04:51:43.979576 systemd-logind[1509]: Session 52 logged out. Waiting for processes to exit. Jan 30 04:51:43.980860 systemd-logind[1509]: Removed session 52. Jan 30 04:51:49.149201 systemd[1]: Started sshd@56-116.202.14.223:22-139.178.89.65:49292.service - OpenSSH per-connection server daemon (139.178.89.65:49292). Jan 30 04:51:50.135559 sshd[4957]: Accepted publickey for core from 139.178.89.65 port 49292 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:50.137408 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:50.142691 systemd-logind[1509]: New session 53 of user core. Jan 30 04:51:50.148024 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 30 04:51:50.877674 sshd[4959]: Connection closed by 139.178.89.65 port 49292 Jan 30 04:51:50.878139 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:50.881741 systemd[1]: sshd@56-116.202.14.223:22-139.178.89.65:49292.service: Deactivated successfully. Jan 30 04:51:50.884201 systemd[1]: session-53.scope: Deactivated successfully. Jan 30 04:51:50.884910 systemd-logind[1509]: Session 53 logged out. Waiting for processes to exit. Jan 30 04:51:50.885972 systemd-logind[1509]: Removed session 53. Jan 30 04:51:56.057109 systemd[1]: Started sshd@57-116.202.14.223:22-139.178.89.65:35840.service - OpenSSH per-connection server daemon (139.178.89.65:35840). 
Jan 30 04:51:57.044226 sshd[4970]: Accepted publickey for core from 139.178.89.65 port 35840 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:51:57.046019 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:51:57.051445 systemd-logind[1509]: New session 54 of user core. Jan 30 04:51:57.055031 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 30 04:51:57.790377 sshd[4972]: Connection closed by 139.178.89.65 port 35840 Jan 30 04:51:57.791596 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Jan 30 04:51:57.796804 systemd[1]: sshd@57-116.202.14.223:22-139.178.89.65:35840.service: Deactivated successfully. Jan 30 04:51:57.799323 systemd[1]: session-54.scope: Deactivated successfully. Jan 30 04:51:57.800845 systemd-logind[1509]: Session 54 logged out. Waiting for processes to exit. Jan 30 04:51:57.802266 systemd-logind[1509]: Removed session 54. Jan 30 04:52:02.962446 systemd[1]: Started sshd@58-116.202.14.223:22-139.178.89.65:38576.service - OpenSSH per-connection server daemon (139.178.89.65:38576). Jan 30 04:52:03.941909 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 38576 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:03.944100 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:03.949053 systemd-logind[1509]: New session 55 of user core. Jan 30 04:52:03.959114 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 30 04:52:04.688639 sshd[4987]: Connection closed by 139.178.89.65 port 38576 Jan 30 04:52:04.689438 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:04.693193 systemd[1]: sshd@58-116.202.14.223:22-139.178.89.65:38576.service: Deactivated successfully. Jan 30 04:52:04.695326 systemd[1]: session-55.scope: Deactivated successfully. Jan 30 04:52:04.697290 systemd-logind[1509]: Session 55 logged out. Waiting for processes to exit. Jan 30 04:52:04.698652 systemd-logind[1509]: Removed session 55. Jan 30 04:52:09.865734 systemd[1]: Started sshd@59-116.202.14.223:22-139.178.89.65:38592.service - OpenSSH per-connection server daemon (139.178.89.65:38592). Jan 30 04:52:10.861541 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 38592 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:10.864151 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:10.869661 systemd-logind[1509]: New session 56 of user core. Jan 30 04:52:10.875382 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 30 04:52:11.624773 sshd[5000]: Connection closed by 139.178.89.65 port 38592 Jan 30 04:52:11.625368 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:11.628790 systemd-logind[1509]: Session 56 logged out. Waiting for processes to exit. Jan 30 04:52:11.629616 systemd[1]: sshd@59-116.202.14.223:22-139.178.89.65:38592.service: Deactivated successfully. Jan 30 04:52:11.631592 systemd[1]: session-56.scope: Deactivated successfully. Jan 30 04:52:11.632811 systemd-logind[1509]: Removed session 56. Jan 30 04:52:16.801310 systemd[1]: Started sshd@60-116.202.14.223:22-139.178.89.65:57494.service - OpenSSH per-connection server daemon (139.178.89.65:57494). 
Jan 30 04:52:17.797133 sshd[5013]: Accepted publickey for core from 139.178.89.65 port 57494 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:17.799607 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:17.803808 systemd-logind[1509]: New session 57 of user core. Jan 30 04:52:17.811007 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 30 04:52:18.552677 sshd[5015]: Connection closed by 139.178.89.65 port 57494 Jan 30 04:52:18.553045 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:18.556999 systemd-logind[1509]: Session 57 logged out. Waiting for processes to exit. Jan 30 04:52:18.557516 systemd[1]: sshd@60-116.202.14.223:22-139.178.89.65:57494.service: Deactivated successfully. Jan 30 04:52:18.560555 systemd[1]: session-57.scope: Deactivated successfully. Jan 30 04:52:18.562044 systemd-logind[1509]: Removed session 57. Jan 30 04:52:23.723370 systemd[1]: Started sshd@61-116.202.14.223:22-139.178.89.65:48046.service - OpenSSH per-connection server daemon (139.178.89.65:48046). Jan 30 04:52:24.718317 sshd[5026]: Accepted publickey for core from 139.178.89.65 port 48046 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:24.720282 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:24.725286 systemd-logind[1509]: New session 58 of user core. Jan 30 04:52:24.731050 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 30 04:52:25.486802 sshd[5028]: Connection closed by 139.178.89.65 port 48046 Jan 30 04:52:25.488472 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:25.495376 systemd[1]: sshd@61-116.202.14.223:22-139.178.89.65:48046.service: Deactivated successfully. Jan 30 04:52:25.497267 systemd[1]: session-58.scope: Deactivated successfully. Jan 30 04:52:25.499548 systemd-logind[1509]: Session 58 logged out. Waiting for processes to exit. Jan 30 04:52:25.500877 systemd-logind[1509]: Removed session 58. Jan 30 04:52:30.654948 systemd[1]: Started sshd@62-116.202.14.223:22-139.178.89.65:48050.service - OpenSSH per-connection server daemon (139.178.89.65:48050). Jan 30 04:52:31.645621 sshd[5039]: Accepted publickey for core from 139.178.89.65 port 48050 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:31.647046 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:31.651550 systemd-logind[1509]: New session 59 of user core. Jan 30 04:52:31.657056 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 30 04:52:32.393647 sshd[5041]: Connection closed by 139.178.89.65 port 48050 Jan 30 04:52:32.394371 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:32.397587 systemd[1]: sshd@62-116.202.14.223:22-139.178.89.65:48050.service: Deactivated successfully. Jan 30 04:52:32.399969 systemd[1]: session-59.scope: Deactivated successfully. Jan 30 04:52:32.402240 systemd-logind[1509]: Session 59 logged out. Waiting for processes to exit. Jan 30 04:52:32.403722 systemd-logind[1509]: Removed session 59. Jan 30 04:52:37.564999 systemd[1]: Started sshd@63-116.202.14.223:22-139.178.89.65:46896.service - OpenSSH per-connection server daemon (139.178.89.65:46896). 
Jan 30 04:52:38.546878 sshd[5051]: Accepted publickey for core from 139.178.89.65 port 46896 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:38.548809 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:38.553959 systemd-logind[1509]: New session 60 of user core. Jan 30 04:52:38.561077 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 30 04:52:39.291359 sshd[5053]: Connection closed by 139.178.89.65 port 46896 Jan 30 04:52:39.292174 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:39.295475 systemd[1]: sshd@63-116.202.14.223:22-139.178.89.65:46896.service: Deactivated successfully. Jan 30 04:52:39.298128 systemd[1]: session-60.scope: Deactivated successfully. Jan 30 04:52:39.299876 systemd-logind[1509]: Session 60 logged out. Waiting for processes to exit. Jan 30 04:52:39.301735 systemd-logind[1509]: Removed session 60. Jan 30 04:52:44.469158 systemd[1]: Started sshd@64-116.202.14.223:22-139.178.89.65:40830.service - OpenSSH per-connection server daemon (139.178.89.65:40830). Jan 30 04:52:45.445476 sshd[5064]: Accepted publickey for core from 139.178.89.65 port 40830 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:45.447159 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:45.452599 systemd-logind[1509]: New session 61 of user core. Jan 30 04:52:45.457041 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 30 04:52:46.213306 sshd[5068]: Connection closed by 139.178.89.65 port 40830 Jan 30 04:52:46.213988 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:46.217415 systemd-logind[1509]: Session 61 logged out. Waiting for processes to exit. Jan 30 04:52:46.218205 systemd[1]: sshd@64-116.202.14.223:22-139.178.89.65:40830.service: Deactivated successfully. Jan 30 04:52:46.221190 systemd[1]: session-61.scope: Deactivated successfully. Jan 30 04:52:46.222403 systemd-logind[1509]: Removed session 61. Jan 30 04:52:51.388142 systemd[1]: Started sshd@65-116.202.14.223:22-139.178.89.65:42682.service - OpenSSH per-connection server daemon (139.178.89.65:42682). Jan 30 04:52:52.380776 sshd[5078]: Accepted publickey for core from 139.178.89.65 port 42682 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:52.382307 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:52.386612 systemd-logind[1509]: New session 62 of user core. Jan 30 04:52:52.391003 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 30 04:52:53.114551 sshd[5080]: Connection closed by 139.178.89.65 port 42682 Jan 30 04:52:53.115246 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 30 04:52:53.120421 systemd[1]: sshd@65-116.202.14.223:22-139.178.89.65:42682.service: Deactivated successfully. Jan 30 04:52:53.122675 systemd[1]: session-62.scope: Deactivated successfully. Jan 30 04:52:53.124469 systemd-logind[1509]: Session 62 logged out. Waiting for processes to exit. Jan 30 04:52:53.126030 systemd-logind[1509]: Removed session 62. Jan 30 04:52:58.290520 systemd[1]: Started sshd@66-116.202.14.223:22-139.178.89.65:42692.service - OpenSSH per-connection server daemon (139.178.89.65:42692). 
Jan 30 04:52:59.266023 sshd[5091]: Accepted publickey for core from 139.178.89.65 port 42692 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:52:59.267971 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:52:59.274095 systemd-logind[1509]: New session 63 of user core. Jan 30 04:52:59.280161 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 30 04:53:00.007211 sshd[5095]: Connection closed by 139.178.89.65 port 42692 Jan 30 04:53:00.007920 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:00.012159 systemd-logind[1509]: Session 63 logged out. Waiting for processes to exit. Jan 30 04:53:00.013045 systemd[1]: sshd@66-116.202.14.223:22-139.178.89.65:42692.service: Deactivated successfully. Jan 30 04:53:00.015411 systemd[1]: session-63.scope: Deactivated successfully. Jan 30 04:53:00.016810 systemd-logind[1509]: Removed session 63. Jan 30 04:53:05.175406 systemd[1]: Started sshd@67-116.202.14.223:22-139.178.89.65:55880.service - OpenSSH per-connection server daemon (139.178.89.65:55880). Jan 30 04:53:06.148033 sshd[5106]: Accepted publickey for core from 139.178.89.65 port 55880 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:06.149637 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:06.154081 systemd-logind[1509]: New session 64 of user core. Jan 30 04:53:06.160392 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 30 04:53:06.895561 sshd[5110]: Connection closed by 139.178.89.65 port 55880 Jan 30 04:53:06.896182 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:06.898943 systemd[1]: sshd@67-116.202.14.223:22-139.178.89.65:55880.service: Deactivated successfully. Jan 30 04:53:06.900654 systemd[1]: session-64.scope: Deactivated successfully. Jan 30 04:53:06.902104 systemd-logind[1509]: Session 64 logged out. Waiting for processes to exit. Jan 30 04:53:06.903282 systemd-logind[1509]: Removed session 64. Jan 30 04:53:09.095200 systemd[1]: Started sshd@68-116.202.14.223:22-116.255.254.185:41233.service - OpenSSH per-connection server daemon (116.255.254.185:41233). Jan 30 04:53:10.336490 sshd[5121]: Connection closed by authenticating user root 116.255.254.185 port 41233 [preauth] Jan 30 04:53:10.339314 systemd[1]: sshd@68-116.202.14.223:22-116.255.254.185:41233.service: Deactivated successfully. Jan 30 04:53:12.062519 systemd[1]: Started sshd@69-116.202.14.223:22-139.178.89.65:37274.service - OpenSSH per-connection server daemon (139.178.89.65:37274). Jan 30 04:53:13.030244 sshd[5126]: Accepted publickey for core from 139.178.89.65 port 37274 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:13.031923 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:13.038452 systemd-logind[1509]: New session 65 of user core. Jan 30 04:53:13.046035 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 30 04:53:13.766127 sshd[5128]: Connection closed by 139.178.89.65 port 37274 Jan 30 04:53:13.766829 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:13.771229 systemd[1]: sshd@69-116.202.14.223:22-139.178.89.65:37274.service: Deactivated successfully. Jan 30 04:53:13.774122 systemd[1]: session-65.scope: Deactivated successfully. Jan 30 04:53:13.775469 systemd-logind[1509]: Session 65 logged out. Waiting for processes to exit.
Jan 30 04:53:13.777297 systemd-logind[1509]: Removed session 65. Jan 30 04:53:18.938787 systemd[1]: Started sshd@70-116.202.14.223:22-139.178.89.65:37286.service - OpenSSH per-connection server daemon (139.178.89.65:37286). Jan 30 04:53:19.924956 sshd[5142]: Accepted publickey for core from 139.178.89.65 port 37286 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:19.926619 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:19.932090 systemd-logind[1509]: New session 66 of user core. Jan 30 04:53:19.936033 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 30 04:53:20.675764 sshd[5144]: Connection closed by 139.178.89.65 port 37286 Jan 30 04:53:20.676724 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:20.680181 systemd[1]: sshd@70-116.202.14.223:22-139.178.89.65:37286.service: Deactivated successfully. Jan 30 04:53:20.682098 systemd[1]: session-66.scope: Deactivated successfully. Jan 30 04:53:20.684029 systemd-logind[1509]: Session 66 logged out. Waiting for processes to exit. Jan 30 04:53:20.685263 systemd-logind[1509]: Removed session 66. Jan 30 04:53:25.847119 systemd[1]: Started sshd@71-116.202.14.223:22-139.178.89.65:59736.service - OpenSSH per-connection server daemon (139.178.89.65:59736). Jan 30 04:53:26.815071 sshd[5155]: Accepted publickey for core from 139.178.89.65 port 59736 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:26.816725 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:26.821728 systemd-logind[1509]: New session 67 of user core. Jan 30 04:53:26.825081 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 30 04:53:27.556376 sshd[5157]: Connection closed by 139.178.89.65 port 59736 Jan 30 04:53:27.556027 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:27.559286 systemd[1]: sshd@71-116.202.14.223:22-139.178.89.65:59736.service: Deactivated successfully. Jan 30 04:53:27.561838 systemd[1]: session-67.scope: Deactivated successfully. Jan 30 04:53:27.563779 systemd-logind[1509]: Session 67 logged out. Waiting for processes to exit. Jan 30 04:53:27.564785 systemd-logind[1509]: Removed session 67. Jan 30 04:53:32.732074 systemd[1]: Started sshd@72-116.202.14.223:22-139.178.89.65:52468.service - OpenSSH per-connection server daemon (139.178.89.65:52468). Jan 30 04:53:33.727304 sshd[5168]: Accepted publickey for core from 139.178.89.65 port 52468 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:33.729360 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:33.734752 systemd-logind[1509]: New session 68 of user core. Jan 30 04:53:33.740069 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 30 04:53:34.474835 sshd[5170]: Connection closed by 139.178.89.65 port 52468 Jan 30 04:53:34.475680 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:34.479288 systemd[1]: sshd@72-116.202.14.223:22-139.178.89.65:52468.service: Deactivated successfully. Jan 30 04:53:34.481573 systemd[1]: session-68.scope: Deactivated successfully. Jan 30 04:53:34.483521 systemd-logind[1509]: Session 68 logged out. Waiting for processes to exit. Jan 30 04:53:34.484745 systemd-logind[1509]: Removed session 68.
Jan 30 04:53:39.651209 systemd[1]: Started sshd@73-116.202.14.223:22-139.178.89.65:52476.service - OpenSSH per-connection server daemon (139.178.89.65:52476). Jan 30 04:53:40.634536 sshd[5181]: Accepted publickey for core from 139.178.89.65 port 52476 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:40.636205 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:40.641246 systemd-logind[1509]: New session 69 of user core. Jan 30 04:53:40.649033 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 30 04:53:41.418421 sshd[5183]: Connection closed by 139.178.89.65 port 52476 Jan 30 04:53:41.420321 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:41.430074 systemd-logind[1509]: Session 69 logged out. Waiting for processes to exit. Jan 30 04:53:41.430236 systemd[1]: sshd@73-116.202.14.223:22-139.178.89.65:52476.service: Deactivated successfully. Jan 30 04:53:41.432869 systemd[1]: session-69.scope: Deactivated successfully. Jan 30 04:53:41.434355 systemd-logind[1509]: Removed session 69. Jan 30 04:53:46.592183 systemd[1]: Started sshd@74-116.202.14.223:22-139.178.89.65:51754.service - OpenSSH per-connection server daemon (139.178.89.65:51754). Jan 30 04:53:47.589299 sshd[5196]: Accepted publickey for core from 139.178.89.65 port 51754 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:47.590875 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:47.595532 systemd-logind[1509]: New session 70 of user core. Jan 30 04:53:47.598066 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 30 04:53:48.367983 sshd[5198]: Connection closed by 139.178.89.65 port 51754 Jan 30 04:53:48.368745 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:48.371661 systemd[1]: sshd@74-116.202.14.223:22-139.178.89.65:51754.service: Deactivated successfully. Jan 30 04:53:48.373572 systemd[1]: session-70.scope: Deactivated successfully. Jan 30 04:53:48.374962 systemd-logind[1509]: Session 70 logged out. Waiting for processes to exit. Jan 30 04:53:48.376199 systemd-logind[1509]: Removed session 70. Jan 30 04:53:53.543149 systemd[1]: Started sshd@75-116.202.14.223:22-139.178.89.65:46836.service - OpenSSH per-connection server daemon (139.178.89.65:46836). Jan 30 04:53:54.515467 sshd[5209]: Accepted publickey for core from 139.178.89.65 port 46836 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:53:54.517410 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:53:54.522808 systemd-logind[1509]: New session 71 of user core. Jan 30 04:53:54.527996 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 30 04:53:55.262611 sshd[5211]: Connection closed by 139.178.89.65 port 46836 Jan 30 04:53:55.263410 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Jan 30 04:53:55.267130 systemd[1]: sshd@75-116.202.14.223:22-139.178.89.65:46836.service: Deactivated successfully. Jan 30 04:53:55.270322 systemd[1]: session-71.scope: Deactivated successfully. Jan 30 04:53:55.272616 systemd-logind[1509]: Session 71 logged out. Waiting for processes to exit. Jan 30 04:53:55.273912 systemd-logind[1509]: Removed session 71. Jan 30 04:54:00.434912 systemd[1]: Started sshd@76-116.202.14.223:22-139.178.89.65:46838.service - OpenSSH per-connection server daemon (139.178.89.65:46838). 
Jan 30 04:54:01.417773 sshd[5224]: Accepted publickey for core from 139.178.89.65 port 46838 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:01.419504 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:01.425940 systemd-logind[1509]: New session 72 of user core. Jan 30 04:54:01.432084 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 30 04:54:02.167164 sshd[5226]: Connection closed by 139.178.89.65 port 46838 Jan 30 04:54:02.167940 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:02.172154 systemd[1]: sshd@76-116.202.14.223:22-139.178.89.65:46838.service: Deactivated successfully. Jan 30 04:54:02.174638 systemd[1]: session-72.scope: Deactivated successfully. Jan 30 04:54:02.175783 systemd-logind[1509]: Session 72 logged out. Waiting for processes to exit. Jan 30 04:54:02.177157 systemd-logind[1509]: Removed session 72. Jan 30 04:54:07.339237 systemd[1]: Started sshd@77-116.202.14.223:22-139.178.89.65:39432.service - OpenSSH per-connection server daemon (139.178.89.65:39432). Jan 30 04:54:08.305637 sshd[5238]: Accepted publickey for core from 139.178.89.65 port 39432 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:08.307483 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:08.312763 systemd-logind[1509]: New session 73 of user core. Jan 30 04:54:08.319044 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 30 04:54:09.037404 sshd[5240]: Connection closed by 139.178.89.65 port 39432 Jan 30 04:54:09.038104 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:09.041335 systemd[1]: sshd@77-116.202.14.223:22-139.178.89.65:39432.service: Deactivated successfully. Jan 30 04:54:09.043580 systemd[1]: session-73.scope: Deactivated successfully. Jan 30 04:54:09.045035 systemd-logind[1509]: Session 73 logged out. Waiting for processes to exit. Jan 30 04:54:09.046429 systemd-logind[1509]: Removed session 73. Jan 30 04:54:14.209217 systemd[1]: Started sshd@78-116.202.14.223:22-139.178.89.65:40138.service - OpenSSH per-connection server daemon (139.178.89.65:40138). Jan 30 04:54:15.189471 sshd[5251]: Accepted publickey for core from 139.178.89.65 port 40138 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:15.191301 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:15.197112 systemd-logind[1509]: New session 74 of user core. Jan 30 04:54:15.206128 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 30 04:54:15.930726 sshd[5255]: Connection closed by 139.178.89.65 port 40138 Jan 30 04:54:15.931458 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:15.934186 systemd[1]: sshd@78-116.202.14.223:22-139.178.89.65:40138.service: Deactivated successfully. Jan 30 04:54:15.936538 systemd[1]: session-74.scope: Deactivated successfully. Jan 30 04:54:15.938601 systemd-logind[1509]: Session 74 logged out. Waiting for processes to exit. Jan 30 04:54:15.940099 systemd-logind[1509]: Removed session 74. Jan 30 04:54:21.102860 systemd[1]: Started sshd@79-116.202.14.223:22-139.178.89.65:40146.service - OpenSSH per-connection server daemon (139.178.89.65:40146). Jan 30 04:54:21.107009 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
Jan 30 04:54:21.138384 systemd-tmpfiles[5267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 04:54:21.139603 systemd-tmpfiles[5267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 04:54:21.141021 systemd-tmpfiles[5267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 04:54:21.141677 systemd-tmpfiles[5267]: ACLs are not supported, ignoring. Jan 30 04:54:21.141820 systemd-tmpfiles[5267]: ACLs are not supported, ignoring. Jan 30 04:54:21.146373 systemd-tmpfiles[5267]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 04:54:21.146471 systemd-tmpfiles[5267]: Skipping /boot Jan 30 04:54:21.154827 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 30 04:54:21.155069 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 30 04:54:22.092575 sshd[5266]: Accepted publickey for core from 139.178.89.65 port 40146 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:22.094237 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:22.099058 systemd-logind[1509]: New session 75 of user core. Jan 30 04:54:22.109068 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 30 04:54:22.835150 sshd[5271]: Connection closed by 139.178.89.65 port 40146 Jan 30 04:54:22.835968 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:22.838788 systemd[1]: sshd@79-116.202.14.223:22-139.178.89.65:40146.service: Deactivated successfully. Jan 30 04:54:22.840670 systemd[1]: session-75.scope: Deactivated successfully. Jan 30 04:54:22.842484 systemd-logind[1509]: Session 75 logged out. Waiting for processes to exit. Jan 30 04:54:22.843934 systemd-logind[1509]: Removed session 75. Jan 30 04:54:28.011408 systemd[1]: Started sshd@80-116.202.14.223:22-139.178.89.65:48948.service - OpenSSH per-connection server daemon (139.178.89.65:48948). Jan 30 04:54:28.999166 sshd[5281]: Accepted publickey for core from 139.178.89.65 port 48948 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:29.000711 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:29.005516 systemd-logind[1509]: New session 76 of user core. Jan 30 04:54:29.014018 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 30 04:54:29.729854 sshd[5283]: Connection closed by 139.178.89.65 port 48948 Jan 30 04:54:29.730638 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:29.733873 systemd[1]: sshd@80-116.202.14.223:22-139.178.89.65:48948.service: Deactivated successfully. Jan 30 04:54:29.736142 systemd[1]: session-76.scope: Deactivated successfully. Jan 30 04:54:29.738071 systemd-logind[1509]: Session 76 logged out. Waiting for processes to exit. Jan 30 04:54:29.739376 systemd-logind[1509]: Removed session 76. Jan 30 04:54:34.898052 systemd[1]: Started sshd@81-116.202.14.223:22-139.178.89.65:51952.service - OpenSSH per-connection server daemon (139.178.89.65:51952). 
Jan 30 04:54:35.879870 sshd[5294]: Accepted publickey for core from 139.178.89.65 port 51952 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:35.881767 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:35.887545 systemd-logind[1509]: New session 77 of user core. Jan 30 04:54:35.892163 systemd[1]: Started session-77.scope - Session 77 of User core. Jan 30 04:54:36.617727 sshd[5296]: Connection closed by 139.178.89.65 port 51952 Jan 30 04:54:36.618432 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:36.622588 systemd[1]: sshd@81-116.202.14.223:22-139.178.89.65:51952.service: Deactivated successfully. Jan 30 04:54:36.625674 systemd[1]: session-77.scope: Deactivated successfully. Jan 30 04:54:36.626806 systemd-logind[1509]: Session 77 logged out. Waiting for processes to exit. Jan 30 04:54:36.628058 systemd-logind[1509]: Removed session 77. Jan 30 04:54:41.785519 systemd[1]: Started sshd@82-116.202.14.223:22-139.178.89.65:49382.service - OpenSSH per-connection server daemon (139.178.89.65:49382). Jan 30 04:54:42.769674 sshd[5308]: Accepted publickey for core from 139.178.89.65 port 49382 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:42.771247 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:42.775865 systemd-logind[1509]: New session 78 of user core. Jan 30 04:54:42.786009 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 30 04:54:43.501100 sshd[5310]: Connection closed by 139.178.89.65 port 49382 Jan 30 04:54:43.502055 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:43.506360 systemd-logind[1509]: Session 78 logged out. Waiting for processes to exit. Jan 30 04:54:43.507260 systemd[1]: sshd@82-116.202.14.223:22-139.178.89.65:49382.service: Deactivated successfully. Jan 30 04:54:43.510851 systemd[1]: session-78.scope: Deactivated successfully. Jan 30 04:54:43.512718 systemd-logind[1509]: Removed session 78. Jan 30 04:54:48.679220 systemd[1]: Started sshd@83-116.202.14.223:22-139.178.89.65:49396.service - OpenSSH per-connection server daemon (139.178.89.65:49396). Jan 30 04:54:49.672646 sshd[5324]: Accepted publickey for core from 139.178.89.65 port 49396 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:49.674503 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:49.680014 systemd-logind[1509]: New session 79 of user core. Jan 30 04:54:49.684052 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 30 04:54:50.407782 sshd[5326]: Connection closed by 139.178.89.65 port 49396 Jan 30 04:54:50.408699 sshd-session[5324]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:50.413259 systemd[1]: sshd@83-116.202.14.223:22-139.178.89.65:49396.service: Deactivated successfully. Jan 30 04:54:50.416406 systemd[1]: session-79.scope: Deactivated successfully. Jan 30 04:54:50.417522 systemd-logind[1509]: Session 79 logged out. Waiting for processes to exit. Jan 30 04:54:50.418952 systemd-logind[1509]: Removed session 79. Jan 30 04:54:55.581238 systemd[1]: Started sshd@84-116.202.14.223:22-139.178.89.65:59140.service - OpenSSH per-connection server daemon (139.178.89.65:59140). 
Jan 30 04:54:56.570573 sshd[5337]: Accepted publickey for core from 139.178.89.65 port 59140 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:56.572342 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:56.577712 systemd-logind[1509]: New session 80 of user core. Jan 30 04:54:56.583109 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 30 04:54:57.307730 sshd[5339]: Connection closed by 139.178.89.65 port 59140 Jan 30 04:54:57.308619 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:57.312923 systemd[1]: sshd@84-116.202.14.223:22-139.178.89.65:59140.service: Deactivated successfully. Jan 30 04:54:57.315467 systemd[1]: session-80.scope: Deactivated successfully. Jan 30 04:54:57.316517 systemd-logind[1509]: Session 80 logged out. Waiting for processes to exit. Jan 30 04:54:57.317664 systemd-logind[1509]: Removed session 80. Jan 30 04:54:57.484574 systemd[1]: Started sshd@85-116.202.14.223:22-139.178.89.65:59156.service - OpenSSH per-connection server daemon (139.178.89.65:59156). Jan 30 04:54:58.474335 sshd[5350]: Accepted publickey for core from 139.178.89.65 port 59156 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:54:58.475964 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:54:58.482471 systemd-logind[1509]: New session 81 of user core. Jan 30 04:54:58.487039 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 30 04:54:59.435145 sshd[5352]: Connection closed by 139.178.89.65 port 59156 Jan 30 04:54:59.436459 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Jan 30 04:54:59.440794 systemd[1]: sshd@85-116.202.14.223:22-139.178.89.65:59156.service: Deactivated successfully. Jan 30 04:54:59.443543 systemd[1]: session-81.scope: Deactivated successfully. Jan 30 04:54:59.446458 systemd-logind[1509]: Session 81 logged out. Waiting for processes to exit. Jan 30 04:54:59.447682 systemd-logind[1509]: Removed session 81. Jan 30 04:54:59.604114 systemd[1]: Started sshd@86-116.202.14.223:22-139.178.89.65:59160.service - OpenSSH per-connection server daemon (139.178.89.65:59160). Jan 30 04:55:00.574739 sshd[5363]: Accepted publickey for core from 139.178.89.65 port 59160 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:00.576559 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:00.581696 systemd-logind[1509]: New session 82 of user core. Jan 30 04:55:00.590059 systemd[1]: Started session-82.scope - Session 82 of User core. Jan 30 04:55:02.729383 sshd[5365]: Connection closed by 139.178.89.65 port 59160 Jan 30 04:55:02.730879 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:02.739237 systemd[1]: sshd@86-116.202.14.223:22-139.178.89.65:59160.service: Deactivated successfully. Jan 30 04:55:02.741611 systemd[1]: session-82.scope: Deactivated successfully. Jan 30 04:55:02.743448 systemd-logind[1509]: Session 82 logged out. Waiting for processes to exit. Jan 30 04:55:02.744379 systemd-logind[1509]: Removed session 82. Jan 30 04:55:02.909153 systemd[1]: Started sshd@87-116.202.14.223:22-139.178.89.65:37908.service - OpenSSH per-connection server daemon (139.178.89.65:37908). 
Jan 30 04:55:03.908480 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 37908 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:03.910048 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:03.914659 systemd-logind[1509]: New session 83 of user core. Jan 30 04:55:03.919212 systemd[1]: Started session-83.scope - Session 83 of User core. Jan 30 04:55:04.973049 sshd[5383]: Connection closed by 139.178.89.65 port 37908 Jan 30 04:55:04.974147 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:04.979739 systemd[1]: sshd@87-116.202.14.223:22-139.178.89.65:37908.service: Deactivated successfully. Jan 30 04:55:04.982579 systemd[1]: session-83.scope: Deactivated successfully. Jan 30 04:55:04.983551 systemd-logind[1509]: Session 83 logged out. Waiting for processes to exit. Jan 30 04:55:04.984732 systemd-logind[1509]: Removed session 83. Jan 30 04:55:05.152207 systemd[1]: Started sshd@88-116.202.14.223:22-139.178.89.65:37914.service - OpenSSH per-connection server daemon (139.178.89.65:37914). Jan 30 04:55:06.134291 sshd[5392]: Accepted publickey for core from 139.178.89.65 port 37914 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:06.136209 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:06.141408 systemd-logind[1509]: New session 84 of user core. Jan 30 04:55:06.149105 systemd[1]: Started session-84.scope - Session 84 of User core. Jan 30 04:55:06.874706 sshd[5394]: Connection closed by 139.178.89.65 port 37914 Jan 30 04:55:06.875577 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:06.879212 systemd-logind[1509]: Session 84 logged out. Waiting for processes to exit. Jan 30 04:55:06.879665 systemd[1]: sshd@88-116.202.14.223:22-139.178.89.65:37914.service: Deactivated successfully. Jan 30 04:55:06.881827 systemd[1]: session-84.scope: Deactivated successfully. Jan 30 04:55:06.883654 systemd-logind[1509]: Removed session 84. Jan 30 04:55:12.049137 systemd[1]: Started sshd@89-116.202.14.223:22-139.178.89.65:53970.service - OpenSSH per-connection server daemon (139.178.89.65:53970). Jan 30 04:55:13.036442 sshd[5405]: Accepted publickey for core from 139.178.89.65 port 53970 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:13.037953 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:13.042532 systemd-logind[1509]: New session 85 of user core. Jan 30 04:55:13.048034 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 30 04:55:13.778193 sshd[5407]: Connection closed by 139.178.89.65 port 53970 Jan 30 04:55:13.778855 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:13.782257 systemd[1]: sshd@89-116.202.14.223:22-139.178.89.65:53970.service: Deactivated successfully. Jan 30 04:55:13.784706 systemd[1]: session-85.scope: Deactivated successfully. Jan 30 04:55:13.786914 systemd-logind[1509]: Session 85 logged out. Waiting for processes to exit. Jan 30 04:55:13.788223 systemd-logind[1509]: Removed session 85. Jan 30 04:55:18.953208 systemd[1]: Started sshd@90-116.202.14.223:22-139.178.89.65:53984.service - OpenSSH per-connection server daemon (139.178.89.65:53984). 
Jan 30 04:55:19.944152 sshd[5420]: Accepted publickey for core from 139.178.89.65 port 53984 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:19.945955 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:19.951622 systemd-logind[1509]: New session 86 of user core. Jan 30 04:55:19.956064 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 30 04:55:20.677862 sshd[5422]: Connection closed by 139.178.89.65 port 53984 Jan 30 04:55:20.678623 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:20.681766 systemd[1]: sshd@90-116.202.14.223:22-139.178.89.65:53984.service: Deactivated successfully. Jan 30 04:55:20.683816 systemd[1]: session-86.scope: Deactivated successfully. Jan 30 04:55:20.685677 systemd-logind[1509]: Session 86 logged out. Waiting for processes to exit. Jan 30 04:55:20.686935 systemd-logind[1509]: Removed session 86. Jan 30 04:55:25.849141 systemd[1]: Started sshd@91-116.202.14.223:22-139.178.89.65:37864.service - OpenSSH per-connection server daemon (139.178.89.65:37864). Jan 30 04:55:26.814932 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 37864 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:26.816601 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:26.821182 systemd-logind[1509]: New session 87 of user core. Jan 30 04:55:26.825071 systemd[1]: Started session-87.scope - Session 87 of User core. Jan 30 04:55:27.547802 sshd[5435]: Connection closed by 139.178.89.65 port 37864 Jan 30 04:55:27.548461 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:27.551399 systemd[1]: sshd@91-116.202.14.223:22-139.178.89.65:37864.service: Deactivated successfully. Jan 30 04:55:27.553632 systemd[1]: session-87.scope: Deactivated successfully. Jan 30 04:55:27.555621 systemd-logind[1509]: Session 87 logged out. Waiting for processes to exit. Jan 30 04:55:27.557139 systemd-logind[1509]: Removed session 87. Jan 30 04:55:32.721163 systemd[1]: Started sshd@92-116.202.14.223:22-139.178.89.65:56574.service - OpenSSH per-connection server daemon (139.178.89.65:56574). Jan 30 04:55:33.697738 sshd[5446]: Accepted publickey for core from 139.178.89.65 port 56574 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:33.699336 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:33.704227 systemd-logind[1509]: New session 88 of user core. Jan 30 04:55:33.710027 systemd[1]: Started session-88.scope - Session 88 of User core. Jan 30 04:55:34.429917 sshd[5448]: Connection closed by 139.178.89.65 port 56574 Jan 30 04:55:34.430821 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:34.434289 systemd[1]: sshd@92-116.202.14.223:22-139.178.89.65:56574.service: Deactivated successfully. Jan 30 04:55:34.436971 systemd[1]: session-88.scope: Deactivated successfully. Jan 30 04:55:34.438959 systemd-logind[1509]: Session 88 logged out. Waiting for processes to exit. Jan 30 04:55:34.440677 systemd-logind[1509]: Removed session 88. Jan 30 04:55:39.599161 systemd[1]: Started sshd@93-116.202.14.223:22-139.178.89.65:56586.service - OpenSSH per-connection server daemon (139.178.89.65:56586). 
Jan 30 04:55:40.581555 sshd[5460]: Accepted publickey for core from 139.178.89.65 port 56586 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:40.583357 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:40.588675 systemd-logind[1509]: New session 89 of user core. Jan 30 04:55:40.593086 systemd[1]: Started session-89.scope - Session 89 of User core. Jan 30 04:55:41.331389 sshd[5462]: Connection closed by 139.178.89.65 port 56586 Jan 30 04:55:41.332101 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:41.335056 systemd[1]: sshd@93-116.202.14.223:22-139.178.89.65:56586.service: Deactivated successfully. Jan 30 04:55:41.337321 systemd[1]: session-89.scope: Deactivated successfully. Jan 30 04:55:41.339040 systemd-logind[1509]: Session 89 logged out. Waiting for processes to exit. Jan 30 04:55:41.340143 systemd-logind[1509]: Removed session 89. Jan 30 04:55:46.512126 systemd[1]: Started sshd@94-116.202.14.223:22-139.178.89.65:57970.service - OpenSSH per-connection server daemon (139.178.89.65:57970). Jan 30 04:55:47.506340 sshd[5475]: Accepted publickey for core from 139.178.89.65 port 57970 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:47.508127 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:47.512972 systemd-logind[1509]: New session 90 of user core. Jan 30 04:55:47.519066 systemd[1]: Started session-90.scope - Session 90 of User core. Jan 30 04:55:48.243575 sshd[5477]: Connection closed by 139.178.89.65 port 57970 Jan 30 04:55:48.244252 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:48.247034 systemd[1]: sshd@94-116.202.14.223:22-139.178.89.65:57970.service: Deactivated successfully. Jan 30 04:55:48.248850 systemd[1]: session-90.scope: Deactivated successfully. Jan 30 04:55:48.250666 systemd-logind[1509]: Session 90 logged out. Waiting for processes to exit. Jan 30 04:55:48.251852 systemd-logind[1509]: Removed session 90. Jan 30 04:55:53.412810 systemd[1]: Started sshd@95-116.202.14.223:22-139.178.89.65:52870.service - OpenSSH per-connection server daemon (139.178.89.65:52870). Jan 30 04:55:54.405714 sshd[5488]: Accepted publickey for core from 139.178.89.65 port 52870 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:55:54.407591 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:55:54.411941 systemd-logind[1509]: New session 91 of user core. Jan 30 04:55:54.415035 systemd[1]: Started session-91.scope - Session 91 of User core. Jan 30 04:55:55.146852 sshd[5490]: Connection closed by 139.178.89.65 port 52870 Jan 30 04:55:55.147479 sshd-session[5488]: pam_unix(sshd:session): session closed for user core Jan 30 04:55:55.150267 systemd[1]: sshd@95-116.202.14.223:22-139.178.89.65:52870.service: Deactivated successfully. Jan 30 04:55:55.152101 systemd[1]: session-91.scope: Deactivated successfully. Jan 30 04:55:55.153630 systemd-logind[1509]: Session 91 logged out. Waiting for processes to exit. Jan 30 04:55:55.154739 systemd-logind[1509]: Removed session 91. Jan 30 04:56:00.320149 systemd[1]: Started sshd@96-116.202.14.223:22-139.178.89.65:52882.service - OpenSSH per-connection server daemon (139.178.89.65:52882). 
Jan 30 04:56:01.292633 sshd[5503]: Accepted publickey for core from 139.178.89.65 port 52882 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:01.294615 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:01.299357 systemd-logind[1509]: New session 92 of user core. Jan 30 04:56:01.303042 systemd[1]: Started session-92.scope - Session 92 of User core. Jan 30 04:56:02.025341 sshd[5505]: Connection closed by 139.178.89.65 port 52882 Jan 30 04:56:02.026032 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:02.030450 systemd[1]: sshd@96-116.202.14.223:22-139.178.89.65:52882.service: Deactivated successfully. Jan 30 04:56:02.033302 systemd[1]: session-92.scope: Deactivated successfully. Jan 30 04:56:02.034218 systemd-logind[1509]: Session 92 logged out. Waiting for processes to exit. Jan 30 04:56:02.035410 systemd-logind[1509]: Removed session 92. Jan 30 04:56:07.199286 systemd[1]: Started sshd@97-116.202.14.223:22-139.178.89.65:49518.service - OpenSSH per-connection server daemon (139.178.89.65:49518). Jan 30 04:56:08.174984 sshd[5516]: Accepted publickey for core from 139.178.89.65 port 49518 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:08.176845 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:08.183039 systemd-logind[1509]: New session 93 of user core. Jan 30 04:56:08.191096 systemd[1]: Started session-93.scope - Session 93 of User core. Jan 30 04:56:08.906771 sshd[5518]: Connection closed by 139.178.89.65 port 49518 Jan 30 04:56:08.907537 sshd-session[5516]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:08.911864 systemd-logind[1509]: Session 93 logged out. Waiting for processes to exit. Jan 30 04:56:08.912717 systemd[1]: sshd@97-116.202.14.223:22-139.178.89.65:49518.service: Deactivated successfully. Jan 30 04:56:08.915280 systemd[1]: session-93.scope: Deactivated successfully. Jan 30 04:56:08.916372 systemd-logind[1509]: Removed session 93. Jan 30 04:56:14.079319 systemd[1]: Started sshd@98-116.202.14.223:22-139.178.89.65:49000.service - OpenSSH per-connection server daemon (139.178.89.65:49000). Jan 30 04:56:15.057068 sshd[5529]: Accepted publickey for core from 139.178.89.65 port 49000 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:15.058985 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:15.064625 systemd-logind[1509]: New session 94 of user core. Jan 30 04:56:15.072094 systemd[1]: Started session-94.scope - Session 94 of User core. Jan 30 04:56:15.804066 sshd[5531]: Connection closed by 139.178.89.65 port 49000 Jan 30 04:56:15.804824 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:15.808708 systemd[1]: sshd@98-116.202.14.223:22-139.178.89.65:49000.service: Deactivated successfully. Jan 30 04:56:15.811829 systemd[1]: session-94.scope: Deactivated successfully. Jan 30 04:56:15.813574 systemd-logind[1509]: Session 94 logged out. Waiting for processes to exit. Jan 30 04:56:15.815076 systemd-logind[1509]: Removed session 94. Jan 30 04:56:20.983164 systemd[1]: Started sshd@99-116.202.14.223:22-139.178.89.65:49008.service - OpenSSH per-connection server daemon (139.178.89.65:49008). 
Jan 30 04:56:21.966197 sshd[5545]: Accepted publickey for core from 139.178.89.65 port 49008 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:21.967804 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:21.972322 systemd-logind[1509]: New session 95 of user core. Jan 30 04:56:21.977019 systemd[1]: Started session-95.scope - Session 95 of User core. Jan 30 04:56:22.705091 sshd[5547]: Connection closed by 139.178.89.65 port 49008 Jan 30 04:56:22.705852 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:22.712163 systemd[1]: sshd@99-116.202.14.223:22-139.178.89.65:49008.service: Deactivated successfully. Jan 30 04:56:22.715471 systemd[1]: session-95.scope: Deactivated successfully. Jan 30 04:56:22.716741 systemd-logind[1509]: Session 95 logged out. Waiting for processes to exit. Jan 30 04:56:22.718061 systemd-logind[1509]: Removed session 95. Jan 30 04:56:27.878204 systemd[1]: Started sshd@100-116.202.14.223:22-139.178.89.65:35614.service - OpenSSH per-connection server daemon (139.178.89.65:35614). Jan 30 04:56:28.863539 sshd[5557]: Accepted publickey for core from 139.178.89.65 port 35614 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:28.865436 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:28.874051 systemd-logind[1509]: New session 96 of user core. Jan 30 04:56:28.877060 systemd[1]: Started session-96.scope - Session 96 of User core. Jan 30 04:56:29.600748 sshd[5560]: Connection closed by 139.178.89.65 port 35614 Jan 30 04:56:29.601507 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:29.604464 systemd[1]: sshd@100-116.202.14.223:22-139.178.89.65:35614.service: Deactivated successfully. Jan 30 04:56:29.606422 systemd[1]: session-96.scope: Deactivated successfully. Jan 30 04:56:29.608041 systemd-logind[1509]: Session 96 logged out. Waiting for processes to exit. Jan 30 04:56:29.609292 systemd-logind[1509]: Removed session 96. Jan 30 04:56:34.775136 systemd[1]: Started sshd@101-116.202.14.223:22-139.178.89.65:50410.service - OpenSSH per-connection server daemon (139.178.89.65:50410). Jan 30 04:56:35.741184 sshd[5571]: Accepted publickey for core from 139.178.89.65 port 50410 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:35.742755 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:35.747944 systemd-logind[1509]: New session 97 of user core. Jan 30 04:56:35.754061 systemd[1]: Started session-97.scope - Session 97 of User core. Jan 30 04:56:36.473903 sshd[5574]: Connection closed by 139.178.89.65 port 50410 Jan 30 04:56:36.474663 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:36.477505 systemd[1]: sshd@101-116.202.14.223:22-139.178.89.65:50410.service: Deactivated successfully. Jan 30 04:56:36.480254 systemd[1]: session-97.scope: Deactivated successfully. Jan 30 04:56:36.481744 systemd-logind[1509]: Session 97 logged out. Waiting for processes to exit. Jan 30 04:56:36.483165 systemd-logind[1509]: Removed session 97. Jan 30 04:56:41.649181 systemd[1]: Started sshd@102-116.202.14.223:22-139.178.89.65:36682.service - OpenSSH per-connection server daemon (139.178.89.65:36682). 
Jan 30 04:56:42.611166 sshd[5585]: Accepted publickey for core from 139.178.89.65 port 36682 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:42.612965 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:42.618363 systemd-logind[1509]: New session 98 of user core. Jan 30 04:56:42.629141 systemd[1]: Started session-98.scope - Session 98 of User core. Jan 30 04:56:43.335729 sshd[5588]: Connection closed by 139.178.89.65 port 36682 Jan 30 04:56:43.336496 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:43.340855 systemd-logind[1509]: Session 98 logged out. Waiting for processes to exit. Jan 30 04:56:43.341814 systemd[1]: sshd@102-116.202.14.223:22-139.178.89.65:36682.service: Deactivated successfully. Jan 30 04:56:43.344329 systemd[1]: session-98.scope: Deactivated successfully. Jan 30 04:56:43.345972 systemd-logind[1509]: Removed session 98. Jan 30 04:56:48.515147 systemd[1]: Started sshd@103-116.202.14.223:22-139.178.89.65:36690.service - OpenSSH per-connection server daemon (139.178.89.65:36690). Jan 30 04:56:49.494924 sshd[5601]: Accepted publickey for core from 139.178.89.65 port 36690 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:49.496507 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:49.501432 systemd-logind[1509]: New session 99 of user core. Jan 30 04:56:49.506027 systemd[1]: Started session-99.scope - Session 99 of User core. Jan 30 04:56:50.224368 sshd[5603]: Connection closed by 139.178.89.65 port 36690 Jan 30 04:56:50.225224 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:50.230085 systemd-logind[1509]: Session 99 logged out. Waiting for processes to exit. Jan 30 04:56:50.230464 systemd[1]: sshd@103-116.202.14.223:22-139.178.89.65:36690.service: Deactivated successfully. Jan 30 04:56:50.233271 systemd[1]: session-99.scope: Deactivated successfully. Jan 30 04:56:50.234436 systemd-logind[1509]: Removed session 99. Jan 30 04:56:55.391244 systemd[1]: Started sshd@104-116.202.14.223:22-139.178.89.65:58380.service - OpenSSH per-connection server daemon (139.178.89.65:58380). Jan 30 04:56:56.377136 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 58380 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:56:56.378940 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:56:56.385015 systemd-logind[1509]: New session 100 of user core. Jan 30 04:56:56.393040 systemd[1]: Started session-100.scope - Session 100 of User core. Jan 30 04:56:57.108755 sshd[5616]: Connection closed by 139.178.89.65 port 58380 Jan 30 04:56:57.109535 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Jan 30 04:56:57.113217 systemd[1]: sshd@104-116.202.14.223:22-139.178.89.65:58380.service: Deactivated successfully. Jan 30 04:56:57.115399 systemd[1]: session-100.scope: Deactivated successfully. Jan 30 04:56:57.116324 systemd-logind[1509]: Session 100 logged out. Waiting for processes to exit. Jan 30 04:56:57.117981 systemd-logind[1509]: Removed session 100. Jan 30 04:57:02.280967 systemd[1]: Started sshd@105-116.202.14.223:22-139.178.89.65:60288.service - OpenSSH per-connection server daemon (139.178.89.65:60288). 
Jan 30 04:57:03.270223 sshd[5630]: Accepted publickey for core from 139.178.89.65 port 60288 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:03.272159 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:03.278192 systemd-logind[1509]: New session 101 of user core. Jan 30 04:57:03.284099 systemd[1]: Started session-101.scope - Session 101 of User core. Jan 30 04:57:04.020667 sshd[5632]: Connection closed by 139.178.89.65 port 60288 Jan 30 04:57:04.021405 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:04.024636 systemd[1]: sshd@105-116.202.14.223:22-139.178.89.65:60288.service: Deactivated successfully. Jan 30 04:57:04.026755 systemd[1]: session-101.scope: Deactivated successfully. Jan 30 04:57:04.028615 systemd-logind[1509]: Session 101 logged out. Waiting for processes to exit. Jan 30 04:57:04.029684 systemd-logind[1509]: Removed session 101. Jan 30 04:57:09.194204 systemd[1]: Started sshd@106-116.202.14.223:22-139.178.89.65:60292.service - OpenSSH per-connection server daemon (139.178.89.65:60292). Jan 30 04:57:10.162196 sshd[5643]: Accepted publickey for core from 139.178.89.65 port 60292 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:10.164304 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:10.169062 systemd-logind[1509]: New session 102 of user core. Jan 30 04:57:10.174159 systemd[1]: Started session-102.scope - Session 102 of User core. Jan 30 04:57:10.895799 sshd[5645]: Connection closed by 139.178.89.65 port 60292 Jan 30 04:57:10.896515 sshd-session[5643]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:10.899967 systemd[1]: sshd@106-116.202.14.223:22-139.178.89.65:60292.service: Deactivated successfully. Jan 30 04:57:10.902371 systemd[1]: session-102.scope: Deactivated successfully. Jan 30 04:57:10.904715 systemd-logind[1509]: Session 102 logged out. Waiting for processes to exit. Jan 30 04:57:10.905926 systemd-logind[1509]: Removed session 102. Jan 30 04:57:16.071094 systemd[1]: Started sshd@107-116.202.14.223:22-139.178.89.65:50882.service - OpenSSH per-connection server daemon (139.178.89.65:50882). Jan 30 04:57:17.046438 sshd[5658]: Accepted publickey for core from 139.178.89.65 port 50882 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:17.047928 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:17.052377 systemd-logind[1509]: New session 103 of user core. Jan 30 04:57:17.059043 systemd[1]: Started session-103.scope - Session 103 of User core. Jan 30 04:57:17.778039 sshd[5660]: Connection closed by 139.178.89.65 port 50882 Jan 30 04:57:17.778814 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:17.782257 systemd[1]: sshd@107-116.202.14.223:22-139.178.89.65:50882.service: Deactivated successfully. Jan 30 04:57:17.784796 systemd[1]: session-103.scope: Deactivated successfully. Jan 30 04:57:17.787188 systemd-logind[1509]: Session 103 logged out. Waiting for processes to exit. Jan 30 04:57:17.789128 systemd-logind[1509]: Removed session 103. Jan 30 04:57:22.948170 systemd[1]: Started sshd@108-116.202.14.223:22-139.178.89.65:57684.service - OpenSSH per-connection server daemon (139.178.89.65:57684). 
Jan 30 04:57:23.943373 sshd[5672]: Accepted publickey for core from 139.178.89.65 port 57684 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:23.945070 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:23.949475 systemd-logind[1509]: New session 104 of user core. Jan 30 04:57:23.955112 systemd[1]: Started session-104.scope - Session 104 of User core. Jan 30 04:57:24.674452 sshd[5674]: Connection closed by 139.178.89.65 port 57684 Jan 30 04:57:24.675235 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:24.678104 systemd[1]: sshd@108-116.202.14.223:22-139.178.89.65:57684.service: Deactivated successfully. Jan 30 04:57:24.680316 systemd[1]: session-104.scope: Deactivated successfully. Jan 30 04:57:24.681621 systemd-logind[1509]: Session 104 logged out. Waiting for processes to exit. Jan 30 04:57:24.683118 systemd-logind[1509]: Removed session 104. Jan 30 04:57:29.847134 systemd[1]: Started sshd@109-116.202.14.223:22-139.178.89.65:57686.service - OpenSSH per-connection server daemon (139.178.89.65:57686). Jan 30 04:57:30.823642 sshd[5685]: Accepted publickey for core from 139.178.89.65 port 57686 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:30.825467 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:30.831107 systemd-logind[1509]: New session 105 of user core. Jan 30 04:57:30.835027 systemd[1]: Started session-105.scope - Session 105 of User core. Jan 30 04:57:31.554624 sshd[5687]: Connection closed by 139.178.89.65 port 57686 Jan 30 04:57:31.555281 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:31.558957 systemd[1]: sshd@109-116.202.14.223:22-139.178.89.65:57686.service: Deactivated successfully. Jan 30 04:57:31.560961 systemd[1]: session-105.scope: Deactivated successfully. Jan 30 04:57:31.561511 systemd-logind[1509]: Session 105 logged out. Waiting for processes to exit. Jan 30 04:57:31.562757 systemd-logind[1509]: Removed session 105. Jan 30 04:57:36.728164 systemd[1]: Started sshd@110-116.202.14.223:22-139.178.89.65:44048.service - OpenSSH per-connection server daemon (139.178.89.65:44048). Jan 30 04:57:37.704785 sshd[5698]: Accepted publickey for core from 139.178.89.65 port 44048 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:37.706359 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:37.711377 systemd-logind[1509]: New session 106 of user core. Jan 30 04:57:37.713026 systemd[1]: Started session-106.scope - Session 106 of User core. Jan 30 04:57:38.437737 sshd[5700]: Connection closed by 139.178.89.65 port 44048 Jan 30 04:57:38.438304 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:38.442055 systemd[1]: sshd@110-116.202.14.223:22-139.178.89.65:44048.service: Deactivated successfully. Jan 30 04:57:38.444450 systemd[1]: session-106.scope: Deactivated successfully. Jan 30 04:57:38.445390 systemd-logind[1509]: Session 106 logged out. Waiting for processes to exit. Jan 30 04:57:38.446489 systemd-logind[1509]: Removed session 106. Jan 30 04:57:43.614193 systemd[1]: Started sshd@111-116.202.14.223:22-139.178.89.65:57244.service - OpenSSH per-connection server daemon (139.178.89.65:57244). 
Jan 30 04:57:44.588795 sshd[5710]: Accepted publickey for core from 139.178.89.65 port 57244 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:44.590731 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:44.596107 systemd-logind[1509]: New session 107 of user core. Jan 30 04:57:44.602080 systemd[1]: Started session-107.scope - Session 107 of User core. Jan 30 04:57:45.323160 sshd[5712]: Connection closed by 139.178.89.65 port 57244 Jan 30 04:57:45.323861 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:45.326963 systemd[1]: sshd@111-116.202.14.223:22-139.178.89.65:57244.service: Deactivated successfully. Jan 30 04:57:45.329058 systemd[1]: session-107.scope: Deactivated successfully. Jan 30 04:57:45.330687 systemd-logind[1509]: Session 107 logged out. Waiting for processes to exit. Jan 30 04:57:45.332071 systemd-logind[1509]: Removed session 107. Jan 30 04:57:50.500127 systemd[1]: Started sshd@112-116.202.14.223:22-139.178.89.65:57246.service - OpenSSH per-connection server daemon (139.178.89.65:57246). Jan 30 04:57:51.488129 sshd[5725]: Accepted publickey for core from 139.178.89.65 port 57246 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:51.490729 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:51.496683 systemd-logind[1509]: New session 108 of user core. Jan 30 04:57:51.500083 systemd[1]: Started session-108.scope - Session 108 of User core. Jan 30 04:57:52.243460 sshd[5727]: Connection closed by 139.178.89.65 port 57246 Jan 30 04:57:52.244191 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:52.246849 systemd[1]: sshd@112-116.202.14.223:22-139.178.89.65:57246.service: Deactivated successfully. Jan 30 04:57:52.248733 systemd[1]: session-108.scope: Deactivated successfully. Jan 30 04:57:52.250508 systemd-logind[1509]: Session 108 logged out. Waiting for processes to exit. Jan 30 04:57:52.251956 systemd-logind[1509]: Removed session 108. Jan 30 04:57:57.412227 systemd[1]: Started sshd@113-116.202.14.223:22-139.178.89.65:53626.service - OpenSSH per-connection server daemon (139.178.89.65:53626). Jan 30 04:57:58.400302 sshd[5738]: Accepted publickey for core from 139.178.89.65 port 53626 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:57:58.401949 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:57:58.406441 systemd-logind[1509]: New session 109 of user core. Jan 30 04:57:58.411072 systemd[1]: Started session-109.scope - Session 109 of User core. Jan 30 04:57:59.146850 sshd[5740]: Connection closed by 139.178.89.65 port 53626 Jan 30 04:57:59.147590 sshd-session[5738]: pam_unix(sshd:session): session closed for user core Jan 30 04:57:59.151774 systemd[1]: sshd@113-116.202.14.223:22-139.178.89.65:53626.service: Deactivated successfully. Jan 30 04:57:59.154240 systemd[1]: session-109.scope: Deactivated successfully. Jan 30 04:57:59.155119 systemd-logind[1509]: Session 109 logged out. Waiting for processes to exit. Jan 30 04:57:59.156448 systemd-logind[1509]: Removed session 109. Jan 30 04:58:04.317750 systemd[1]: Started sshd@114-116.202.14.223:22-139.178.89.65:37516.service - OpenSSH per-connection server daemon (139.178.89.65:37516). 
Jan 30 04:58:05.301417 sshd[5754]: Accepted publickey for core from 139.178.89.65 port 37516 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:05.302986 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:05.307409 systemd-logind[1509]: New session 110 of user core. Jan 30 04:58:05.312039 systemd[1]: Started session-110.scope - Session 110 of User core. Jan 30 04:58:06.033229 sshd[5756]: Connection closed by 139.178.89.65 port 37516 Jan 30 04:58:06.033985 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:06.037624 systemd[1]: sshd@114-116.202.14.223:22-139.178.89.65:37516.service: Deactivated successfully. Jan 30 04:58:06.040383 systemd[1]: session-110.scope: Deactivated successfully. Jan 30 04:58:06.042011 systemd-logind[1509]: Session 110 logged out. Waiting for processes to exit. Jan 30 04:58:06.043597 systemd-logind[1509]: Removed session 110. Jan 30 04:58:11.211168 systemd[1]: Started sshd@115-116.202.14.223:22-139.178.89.65:37530.service - OpenSSH per-connection server daemon (139.178.89.65:37530). Jan 30 04:58:12.198277 sshd[5767]: Accepted publickey for core from 139.178.89.65 port 37530 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:12.199942 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:12.204763 systemd-logind[1509]: New session 111 of user core. Jan 30 04:58:12.208003 systemd[1]: Started session-111.scope - Session 111 of User core. Jan 30 04:58:12.947879 sshd[5769]: Connection closed by 139.178.89.65 port 37530 Jan 30 04:58:12.948631 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:12.952419 systemd[1]: sshd@115-116.202.14.223:22-139.178.89.65:37530.service: Deactivated successfully. Jan 30 04:58:12.954833 systemd[1]: session-111.scope: Deactivated successfully. Jan 30 04:58:12.955691 systemd-logind[1509]: Session 111 logged out. Waiting for processes to exit. Jan 30 04:58:12.956775 systemd-logind[1509]: Removed session 111. Jan 30 04:58:18.127132 systemd[1]: Started sshd@116-116.202.14.223:22-139.178.89.65:39990.service - OpenSSH per-connection server daemon (139.178.89.65:39990). Jan 30 04:58:19.111349 sshd[5782]: Accepted publickey for core from 139.178.89.65 port 39990 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:19.113082 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:19.117636 systemd-logind[1509]: New session 112 of user core. Jan 30 04:58:19.123045 systemd[1]: Started session-112.scope - Session 112 of User core. Jan 30 04:58:19.865725 sshd[5784]: Connection closed by 139.178.89.65 port 39990 Jan 30 04:58:19.866396 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:19.870414 systemd[1]: sshd@116-116.202.14.223:22-139.178.89.65:39990.service: Deactivated successfully. Jan 30 04:58:19.873133 systemd[1]: session-112.scope: Deactivated successfully. Jan 30 04:58:19.874075 systemd-logind[1509]: Session 112 logged out. Waiting for processes to exit. Jan 30 04:58:19.875295 systemd-logind[1509]: Removed session 112. Jan 30 04:58:25.040058 systemd[1]: Started sshd@117-116.202.14.223:22-139.178.89.65:50794.service - OpenSSH per-connection server daemon (139.178.89.65:50794). 
Jan 30 04:58:26.027944 sshd[5795]: Accepted publickey for core from 139.178.89.65 port 50794 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:26.029473 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:26.033946 systemd-logind[1509]: New session 113 of user core. Jan 30 04:58:26.039012 systemd[1]: Started session-113.scope - Session 113 of User core. Jan 30 04:58:26.759951 sshd[5797]: Connection closed by 139.178.89.65 port 50794 Jan 30 04:58:26.760651 sshd-session[5795]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:26.763295 systemd[1]: sshd@117-116.202.14.223:22-139.178.89.65:50794.service: Deactivated successfully. Jan 30 04:58:26.765443 systemd[1]: session-113.scope: Deactivated successfully. Jan 30 04:58:26.766762 systemd-logind[1509]: Session 113 logged out. Waiting for processes to exit. Jan 30 04:58:26.767915 systemd-logind[1509]: Removed session 113. Jan 30 04:58:31.936148 systemd[1]: Started sshd@118-116.202.14.223:22-139.178.89.65:47660.service - OpenSSH per-connection server daemon (139.178.89.65:47660). Jan 30 04:58:32.910161 sshd[5807]: Accepted publickey for core from 139.178.89.65 port 47660 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:32.911824 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:32.916957 systemd-logind[1509]: New session 114 of user core. Jan 30 04:58:32.922041 systemd[1]: Started session-114.scope - Session 114 of User core. Jan 30 04:58:33.658643 sshd[5809]: Connection closed by 139.178.89.65 port 47660 Jan 30 04:58:33.659430 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:33.663589 systemd[1]: sshd@118-116.202.14.223:22-139.178.89.65:47660.service: Deactivated successfully. Jan 30 04:58:33.665654 systemd[1]: session-114.scope: Deactivated successfully. Jan 30 04:58:33.666647 systemd-logind[1509]: Session 114 logged out. Waiting for processes to exit. Jan 30 04:58:33.667609 systemd-logind[1509]: Removed session 114. Jan 30 04:58:38.835135 systemd[1]: Started sshd@119-116.202.14.223:22-139.178.89.65:47674.service - OpenSSH per-connection server daemon (139.178.89.65:47674). Jan 30 04:58:39.815077 sshd[5820]: Accepted publickey for core from 139.178.89.65 port 47674 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:39.817056 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:39.822963 systemd-logind[1509]: New session 115 of user core. Jan 30 04:58:39.830114 systemd[1]: Started session-115.scope - Session 115 of User core. Jan 30 04:58:40.549938 sshd[5822]: Connection closed by 139.178.89.65 port 47674 Jan 30 04:58:40.550508 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:40.553216 systemd[1]: sshd@119-116.202.14.223:22-139.178.89.65:47674.service: Deactivated successfully. Jan 30 04:58:40.555023 systemd[1]: session-115.scope: Deactivated successfully. Jan 30 04:58:40.556202 systemd-logind[1509]: Session 115 logged out. Waiting for processes to exit. Jan 30 04:58:40.557203 systemd-logind[1509]: Removed session 115. Jan 30 04:58:45.719956 systemd[1]: Started sshd@120-116.202.14.223:22-139.178.89.65:57832.service - OpenSSH per-connection server daemon (139.178.89.65:57832). 
Jan 30 04:58:46.705534 sshd[5835]: Accepted publickey for core from 139.178.89.65 port 57832 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:46.706924 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:46.711960 systemd-logind[1509]: New session 116 of user core. Jan 30 04:58:46.717024 systemd[1]: Started session-116.scope - Session 116 of User core. Jan 30 04:58:47.439177 sshd[5837]: Connection closed by 139.178.89.65 port 57832 Jan 30 04:58:47.440470 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:47.443753 systemd[1]: sshd@120-116.202.14.223:22-139.178.89.65:57832.service: Deactivated successfully. Jan 30 04:58:47.446181 systemd[1]: session-116.scope: Deactivated successfully. Jan 30 04:58:47.448132 systemd-logind[1509]: Session 116 logged out. Waiting for processes to exit. Jan 30 04:58:47.449953 systemd-logind[1509]: Removed session 116. Jan 30 04:58:52.613144 systemd[1]: Started sshd@121-116.202.14.223:22-139.178.89.65:39010.service - OpenSSH per-connection server daemon (139.178.89.65:39010). Jan 30 04:58:53.600982 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 39010 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:58:53.602695 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:58:53.608413 systemd-logind[1509]: New session 117 of user core. Jan 30 04:58:53.614253 systemd[1]: Started session-117.scope - Session 117 of User core. Jan 30 04:58:54.333483 sshd[5850]: Connection closed by 139.178.89.65 port 39010 Jan 30 04:58:54.334159 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Jan 30 04:58:54.336927 systemd[1]: sshd@121-116.202.14.223:22-139.178.89.65:39010.service: Deactivated successfully. Jan 30 04:58:54.338650 systemd[1]: session-117.scope: Deactivated successfully. Jan 30 04:58:54.340186 systemd-logind[1509]: Session 117 logged out. Waiting for processes to exit. Jan 30 04:58:54.341563 systemd-logind[1509]: Removed session 117. Jan 30 04:58:59.512311 systemd[1]: Started sshd@122-116.202.14.223:22-139.178.89.65:39014.service - OpenSSH per-connection server daemon (139.178.89.65:39014). Jan 30 04:59:00.509370 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 39014 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:00.511062 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:00.515962 systemd-logind[1509]: New session 118 of user core. Jan 30 04:59:00.519075 systemd[1]: Started session-118.scope - Session 118 of User core. Jan 30 04:59:01.248423 sshd[5865]: Connection closed by 139.178.89.65 port 39014 Jan 30 04:59:01.249067 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Jan 30 04:59:01.251914 systemd[1]: sshd@122-116.202.14.223:22-139.178.89.65:39014.service: Deactivated successfully. Jan 30 04:59:01.254046 systemd[1]: session-118.scope: Deactivated successfully. Jan 30 04:59:01.255423 systemd-logind[1509]: Session 118 logged out. Waiting for processes to exit. Jan 30 04:59:01.257095 systemd-logind[1509]: Removed session 118. Jan 30 04:59:06.426223 systemd[1]: Started sshd@123-116.202.14.223:22-139.178.89.65:34606.service - OpenSSH per-connection server daemon (139.178.89.65:34606). 
Jan 30 04:59:07.427600 sshd[5876]: Accepted publickey for core from 139.178.89.65 port 34606 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:07.429315 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:07.435568 systemd-logind[1509]: New session 119 of user core. Jan 30 04:59:07.439046 systemd[1]: Started session-119.scope - Session 119 of User core. Jan 30 04:59:08.180757 sshd[5878]: Connection closed by 139.178.89.65 port 34606 Jan 30 04:59:08.181398 sshd-session[5876]: pam_unix(sshd:session): session closed for user core Jan 30 04:59:08.185262 systemd[1]: sshd@123-116.202.14.223:22-139.178.89.65:34606.service: Deactivated successfully. Jan 30 04:59:08.187692 systemd[1]: session-119.scope: Deactivated successfully. Jan 30 04:59:08.188645 systemd-logind[1509]: Session 119 logged out. Waiting for processes to exit. Jan 30 04:59:08.189858 systemd-logind[1509]: Removed session 119. Jan 30 04:59:08.355176 systemd[1]: Started sshd@124-116.202.14.223:22-139.178.89.65:34612.service - OpenSSH per-connection server daemon (139.178.89.65:34612). Jan 30 04:59:09.343595 sshd[5889]: Accepted publickey for core from 139.178.89.65 port 34612 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:09.345417 sshd-session[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:09.350564 systemd-logind[1509]: New session 120 of user core. Jan 30 04:59:09.354085 systemd[1]: Started session-120.scope - Session 120 of User core. Jan 30 04:59:11.283308 systemd[1]: run-containerd-runc-k8s.io-e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a-runc.tJzmi0.mount: Deactivated successfully. Jan 30 04:59:11.294764 containerd[1524]: time="2025-01-30T04:59:11.294723525Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 04:59:11.354604 containerd[1524]: time="2025-01-30T04:59:11.354532386Z" level=info msg="StopContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" with timeout 2 (s)" Jan 30 04:59:11.358289 containerd[1524]: time="2025-01-30T04:59:11.358225557Z" level=info msg="Stop container \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" with signal terminated" Jan 30 04:59:11.366506 systemd-networkd[1383]: lxc_health: Link DOWN Jan 30 04:59:11.366513 systemd-networkd[1383]: lxc_health: Lost carrier Jan 30 04:59:11.391361 systemd[1]: cri-containerd-e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a.scope: Deactivated successfully. Jan 30 04:59:11.391645 systemd[1]: cri-containerd-e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a.scope: Consumed 9.722s CPU time. Jan 30 04:59:11.411521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a-rootfs.mount: Deactivated successfully. 
Jan 30 04:59:11.424112 containerd[1524]: time="2025-01-30T04:59:11.424029306Z" level=info msg="shim disconnected" id=e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a namespace=k8s.io Jan 30 04:59:11.424112 containerd[1524]: time="2025-01-30T04:59:11.424101031Z" level=warning msg="cleaning up after shim disconnected" id=e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a namespace=k8s.io Jan 30 04:59:11.424112 containerd[1524]: time="2025-01-30T04:59:11.424110549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:11.438998 containerd[1524]: time="2025-01-30T04:59:11.438492594Z" level=info msg="StopContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" returns successfully" Jan 30 04:59:11.443934 containerd[1524]: time="2025-01-30T04:59:11.443744784Z" level=info msg="StopPodSandbox for \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\"" Jan 30 04:59:11.452942 containerd[1524]: time="2025-01-30T04:59:11.449223807Z" level=info msg="Container to stop \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.452942 containerd[1524]: time="2025-01-30T04:59:11.452936826Z" level=info msg="Container to stop \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.452942 containerd[1524]: time="2025-01-30T04:59:11.452948157Z" level=info msg="Container to stop \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.453139 containerd[1524]: time="2025-01-30T04:59:11.452955811Z" level=info msg="Container to stop \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.453139 containerd[1524]: time="2025-01-30T04:59:11.452963496Z" level=info msg="Container to stop \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.456244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e-shm.mount: Deactivated successfully. Jan 30 04:59:11.465004 systemd[1]: cri-containerd-7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e.scope: Deactivated successfully. Jan 30 04:59:11.482552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e-rootfs.mount: Deactivated successfully. 
Jan 30 04:59:11.487278 containerd[1524]: time="2025-01-30T04:59:11.487209567Z" level=info msg="shim disconnected" id=7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e namespace=k8s.io Jan 30 04:59:11.487278 containerd[1524]: time="2025-01-30T04:59:11.487268547Z" level=warning msg="cleaning up after shim disconnected" id=7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e namespace=k8s.io Jan 30 04:59:11.487278 containerd[1524]: time="2025-01-30T04:59:11.487277223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:11.502419 containerd[1524]: time="2025-01-30T04:59:11.502387390Z" level=info msg="TearDown network for sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" successfully" Jan 30 04:59:11.502655 containerd[1524]: time="2025-01-30T04:59:11.502520741Z" level=info msg="StopPodSandbox for \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" returns successfully" Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636716 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636787 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ksc\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-kube-api-access-v5ksc\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636810 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cni-path\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636823 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-net\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636839 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-cgroup\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637283 kubelet[2892]: I0130 04:59:11.636854 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-xtables-lock\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636870 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-etc-cni-netd\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636897 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hostproc\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636915 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-clustermesh-secrets\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636929 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-lib-modules\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636941 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-kernel\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.637844 kubelet[2892]: I0130 04:59:11.636956 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-bpf-maps\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.638051 kubelet[2892]: I0130 04:59:11.636974 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-config-path\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.638051 kubelet[2892]: I0130 04:59:11.636987 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-run\") pod \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\" (UID: \"01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2\") " Jan 30 04:59:11.652072 kubelet[2892]: I0130 04:59:11.650743 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652072 kubelet[2892]: I0130 04:59:11.650559 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 04:59:11.652072 kubelet[2892]: I0130 04:59:11.651881 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652072 kubelet[2892]: I0130 04:59:11.651938 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652072 kubelet[2892]: I0130 04:59:11.651956 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652319 kubelet[2892]: I0130 04:59:11.651971 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652319 kubelet[2892]: I0130 04:59:11.651986 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.652319 kubelet[2892]: I0130 04:59:11.652076 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-kube-api-access-v5ksc" (OuterVolumeSpecName: "kube-api-access-v5ksc") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "kube-api-access-v5ksc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 04:59:11.652450 kubelet[2892]: I0130 04:59:11.652429 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.653853 kubelet[2892]: I0130 04:59:11.653798 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.653853 kubelet[2892]: I0130 04:59:11.653838 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.654396 kubelet[2892]: I0130 04:59:11.654109 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 04:59:11.657304 kubelet[2892]: I0130 04:59:11.657270 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 04:59:11.660448 kubelet[2892]: I0130 04:59:11.660400 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" (UID: "01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 04:59:11.661083 containerd[1524]: time="2025-01-30T04:59:11.660680721Z" level=info msg="StopContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" with timeout 30 (s)" Jan 30 04:59:11.661903 containerd[1524]: time="2025-01-30T04:59:11.661838127Z" level=info msg="Stop container \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" with signal terminated" Jan 30 04:59:11.672875 systemd[1]: Removed slice kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice - libcontainer container kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice. Jan 30 04:59:11.673310 systemd[1]: kubepods-burstable-pod01878e3e_38d5_4b4e_a4b5_d1ade3a7f9c2.slice: Consumed 9.813s CPU time. Jan 30 04:59:11.683312 kubelet[2892]: I0130 04:59:11.681936 2892 scope.go:117] "RemoveContainer" containerID="e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a" Jan 30 04:59:11.686717 systemd[1]: cri-containerd-eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655.scope: Deactivated successfully. Jan 30 04:59:11.687376 systemd[1]: cri-containerd-eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655.scope: Consumed 1.276s CPU time. 
Jan 30 04:59:11.695777 containerd[1524]: time="2025-01-30T04:59:11.695531364Z" level=info msg="RemoveContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\"" Jan 30 04:59:11.705505 containerd[1524]: time="2025-01-30T04:59:11.705477667Z" level=info msg="RemoveContainer for \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" returns successfully" Jan 30 04:59:11.707070 kubelet[2892]: I0130 04:59:11.706900 2892 scope.go:117] "RemoveContainer" containerID="e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c" Jan 30 04:59:11.708406 containerd[1524]: time="2025-01-30T04:59:11.708162032Z" level=info msg="RemoveContainer for \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\"" Jan 30 04:59:11.711814 containerd[1524]: time="2025-01-30T04:59:11.711791203Z" level=info msg="RemoveContainer for \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\" returns successfully" Jan 30 04:59:11.713003 kubelet[2892]: I0130 04:59:11.712772 2892 scope.go:117] "RemoveContainer" containerID="65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28" Jan 30 04:59:11.717068 containerd[1524]: time="2025-01-30T04:59:11.717027964Z" level=info msg="RemoveContainer for \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\"" Jan 30 04:59:11.722676 containerd[1524]: time="2025-01-30T04:59:11.722127697Z" level=info msg="RemoveContainer for \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\" returns successfully" Jan 30 04:59:11.722991 kubelet[2892]: I0130 04:59:11.722952 2892 scope.go:117] "RemoveContainer" containerID="7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b" Jan 30 04:59:11.724829 containerd[1524]: time="2025-01-30T04:59:11.724651862Z" level=info msg="RemoveContainer for \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\"" Jan 30 04:59:11.729041 containerd[1524]: time="2025-01-30T04:59:11.729009987Z" level=info msg="RemoveContainer for \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\" returns successfully" Jan 30 04:59:11.730398 kubelet[2892]: I0130 04:59:11.730277 2892 scope.go:117] "RemoveContainer" containerID="be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e" Jan 30 04:59:11.734395 containerd[1524]: time="2025-01-30T04:59:11.734361623Z" level=info msg="RemoveContainer for \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\"" Jan 30 04:59:11.735911 containerd[1524]: time="2025-01-30T04:59:11.735766983Z" level=info msg="shim disconnected" id=eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655 namespace=k8s.io Jan 30 04:59:11.735911 containerd[1524]: time="2025-01-30T04:59:11.735802670Z" level=warning msg="cleaning up after shim disconnected" id=eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655 namespace=k8s.io Jan 30 04:59:11.735911 containerd[1524]: time="2025-01-30T04:59:11.735810534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:11.738114 containerd[1524]: time="2025-01-30T04:59:11.737738593Z" level=info msg="RemoveContainer for \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\" returns successfully" Jan 30 04:59:11.738165 kubelet[2892]: I0130 04:59:11.738055 2892 scope.go:117] "RemoveContainer" containerID="e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a" Jan 30 04:59:11.738399 containerd[1524]: time="2025-01-30T04:59:11.738371837Z" level=error msg="ContainerStatus for 
\"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\": not found" Jan 30 04:59:11.740791 kubelet[2892]: I0130 04:59:11.740767 2892 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cni-path\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.740961 kubelet[2892]: I0130 04:59:11.740941 2892 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-net\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741039 2892 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v5ksc\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-kube-api-access-v5ksc\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741060 2892 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-cgroup\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741074 2892 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hostproc\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741085 2892 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-xtables-lock\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741096 2892 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-etc-cni-netd\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741108 2892 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-host-proc-sys-kernel\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741120 2892 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-clustermesh-secrets\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.741626 kubelet[2892]: I0130 04:59:11.741131 2892 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-lib-modules\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.742213 kubelet[2892]: I0130 04:59:11.741142 2892 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-bpf-maps\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.742213 kubelet[2892]: I0130 04:59:11.741153 2892 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-config-path\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.742213 kubelet[2892]: I0130 04:59:11.741163 2892 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-cilium-run\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.742213 kubelet[2892]: I0130 04:59:11.741175 2892 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2-hubble-tls\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.746233 kubelet[2892]: E0130 04:59:11.746049 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\": not found" containerID="e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a" Jan 30 04:59:11.749813 kubelet[2892]: I0130 04:59:11.747525 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a"} err="failed to get container status \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6e263608e8ab7352ec6ef9b0dd7c50f9ab9bd05cba61ea58585e7054d30e26a\": not found" Jan 30 04:59:11.749813 kubelet[2892]: I0130 04:59:11.749711 2892 scope.go:117] "RemoveContainer" containerID="e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c" Jan 30 04:59:11.750395 containerd[1524]: time="2025-01-30T04:59:11.750112661Z" level=error msg="ContainerStatus for \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\": not found" Jan 30 04:59:11.750456 kubelet[2892]: E0130 04:59:11.750278 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\": not found" containerID="e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c" Jan 30 04:59:11.750456 kubelet[2892]: I0130 04:59:11.750307 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c"} err="failed to get container status \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7535ba295c3f5caee3e3f19d32928c35e798d4a61b7605704ed7ae6b5a2625c\": not found" Jan 30 04:59:11.750456 kubelet[2892]: I0130 04:59:11.750330 2892 scope.go:117] "RemoveContainer" containerID="65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28" Jan 30 04:59:11.750947 containerd[1524]: time="2025-01-30T04:59:11.750775922Z" level=error msg="ContainerStatus for \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\": not found" Jan 30 04:59:11.751095 
kubelet[2892]: E0130 04:59:11.751015 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\": not found" containerID="65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28" Jan 30 04:59:11.751095 kubelet[2892]: I0130 04:59:11.751051 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28"} err="failed to get container status \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\": rpc error: code = NotFound desc = an error occurred when try to find container \"65a41be782422e712b4f9f394fbe814baf0e24d4f98ed81772b6b3c751bd4d28\": not found" Jan 30 04:59:11.751095 kubelet[2892]: I0130 04:59:11.751076 2892 scope.go:117] "RemoveContainer" containerID="7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b" Jan 30 04:59:11.751256 containerd[1524]: time="2025-01-30T04:59:11.751218700Z" level=error msg="ContainerStatus for \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\": not found" Jan 30 04:59:11.751447 kubelet[2892]: E0130 04:59:11.751338 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\": not found" containerID="7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b" Jan 30 04:59:11.751491 kubelet[2892]: I0130 04:59:11.751463 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b"} err="failed to get container status \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e8883405fdf4b8301d4918f827a16380823baa96378d038fc2906391591983b\": not found" Jan 30 04:59:11.751491 kubelet[2892]: I0130 04:59:11.751478 2892 scope.go:117] "RemoveContainer" containerID="be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e" Jan 30 04:59:11.751902 containerd[1524]: time="2025-01-30T04:59:11.751635650Z" level=error msg="ContainerStatus for \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\": not found" Jan 30 04:59:11.751957 kubelet[2892]: E0130 04:59:11.751738 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\": not found" containerID="be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e" Jan 30 04:59:11.751957 kubelet[2892]: I0130 04:59:11.751753 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e"} err="failed to get container status \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"be83d8831adf4ca8569e77b6ee848087ceade6e0953b81c57777e71412d2df0e\": not found" Jan 30 04:59:11.753788 containerd[1524]: time="2025-01-30T04:59:11.753765818Z" level=info msg="StopContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" returns successfully" Jan 30 04:59:11.754251 containerd[1524]: time="2025-01-30T04:59:11.754230176Z" level=info msg="StopPodSandbox for \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\"" Jan 30 04:59:11.754491 containerd[1524]: time="2025-01-30T04:59:11.754390065Z" level=info msg="Container to stop \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 04:59:11.760527 systemd[1]: cri-containerd-21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553.scope: Deactivated successfully. Jan 30 04:59:11.784867 containerd[1524]: time="2025-01-30T04:59:11.784717331Z" level=info msg="shim disconnected" id=21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553 namespace=k8s.io Jan 30 04:59:11.784867 containerd[1524]: time="2025-01-30T04:59:11.784765552Z" level=warning msg="cleaning up after shim disconnected" id=21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553 namespace=k8s.io Jan 30 04:59:11.784867 containerd[1524]: time="2025-01-30T04:59:11.784773557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:11.799021 containerd[1524]: time="2025-01-30T04:59:11.798980354Z" level=info msg="TearDown network for sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" successfully" Jan 30 04:59:11.799021 containerd[1524]: time="2025-01-30T04:59:11.799006363Z" level=info msg="StopPodSandbox for \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" returns successfully" Jan 30 04:59:11.841364 kubelet[2892]: I0130 04:59:11.841312 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/110a281d-60bb-48e2-aad6-f2afaead73bb-cilium-config-path\") pod \"110a281d-60bb-48e2-aad6-f2afaead73bb\" (UID: \"110a281d-60bb-48e2-aad6-f2afaead73bb\") " Jan 30 04:59:11.841364 kubelet[2892]: I0130 04:59:11.841369 2892 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzsk\" (UniqueName: \"kubernetes.io/projected/110a281d-60bb-48e2-aad6-f2afaead73bb-kube-api-access-zzzsk\") pod \"110a281d-60bb-48e2-aad6-f2afaead73bb\" (UID: \"110a281d-60bb-48e2-aad6-f2afaead73bb\") " Jan 30 04:59:11.844162 kubelet[2892]: I0130 04:59:11.844126 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110a281d-60bb-48e2-aad6-f2afaead73bb-kube-api-access-zzzsk" (OuterVolumeSpecName: "kube-api-access-zzzsk") pod "110a281d-60bb-48e2-aad6-f2afaead73bb" (UID: "110a281d-60bb-48e2-aad6-f2afaead73bb"). InnerVolumeSpecName "kube-api-access-zzzsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 04:59:11.845465 kubelet[2892]: I0130 04:59:11.845419 2892 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110a281d-60bb-48e2-aad6-f2afaead73bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "110a281d-60bb-48e2-aad6-f2afaead73bb" (UID: "110a281d-60bb-48e2-aad6-f2afaead73bb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 04:59:11.942723 kubelet[2892]: I0130 04:59:11.942496 2892 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/110a281d-60bb-48e2-aad6-f2afaead73bb-cilium-config-path\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:11.942723 kubelet[2892]: I0130 04:59:11.942534 2892 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zzzsk\" (UniqueName: \"kubernetes.io/projected/110a281d-60bb-48e2-aad6-f2afaead73bb-kube-api-access-zzzsk\") on node \"ci-4186-1-0-8-df2fd9e83c\" DevicePath \"\"" Jan 30 04:59:12.279403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655-rootfs.mount: Deactivated successfully. Jan 30 04:59:12.279533 systemd[1]: var-lib-kubelet-pods-01878e3e\x2d38d5\x2d4b4e\x2da4b5\x2dd1ade3a7f9c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 04:59:12.279670 systemd[1]: var-lib-kubelet-pods-01878e3e\x2d38d5\x2d4b4e\x2da4b5\x2dd1ade3a7f9c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 04:59:12.279774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553-rootfs.mount: Deactivated successfully. Jan 30 04:59:12.279872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553-shm.mount: Deactivated successfully. Jan 30 04:59:12.280425 systemd[1]: var-lib-kubelet-pods-110a281d\x2d60bb\x2d48e2\x2daad6\x2df2afaead73bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzzzsk.mount: Deactivated successfully. Jan 30 04:59:12.280643 systemd[1]: var-lib-kubelet-pods-01878e3e\x2d38d5\x2d4b4e\x2da4b5\x2dd1ade3a7f9c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv5ksc.mount: Deactivated successfully. 
Jan 30 04:59:12.664055 kubelet[2892]: I0130 04:59:12.662718 2892 scope.go:117] "RemoveContainer" containerID="eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655" Jan 30 04:59:12.664468 containerd[1524]: time="2025-01-30T04:59:12.664432171Z" level=info msg="RemoveContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\"" Jan 30 04:59:12.669106 containerd[1524]: time="2025-01-30T04:59:12.669011952Z" level=info msg="RemoveContainer for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" returns successfully" Jan 30 04:59:12.669322 kubelet[2892]: I0130 04:59:12.669295 2892 scope.go:117] "RemoveContainer" containerID="eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655" Jan 30 04:59:12.669550 containerd[1524]: time="2025-01-30T04:59:12.669506046Z" level=error msg="ContainerStatus for \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\": not found" Jan 30 04:59:12.669708 kubelet[2892]: E0130 04:59:12.669674 2892 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\": not found" containerID="eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655" Jan 30 04:59:12.669756 kubelet[2892]: I0130 04:59:12.669703 2892 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655"} err="failed to get container status \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb67a5010022c6d0f7ad1bd79a79ba1eee5119ab2fe41d5fa80c6b2a75fc2655\": not found" Jan 30 04:59:12.672175 systemd[1]: Removed slice kubepods-besteffort-pod110a281d_60bb_48e2_aad6_f2afaead73bb.slice - libcontainer container kubepods-besteffort-pod110a281d_60bb_48e2_aad6_f2afaead73bb.slice. Jan 30 04:59:12.672289 systemd[1]: kubepods-besteffort-pod110a281d_60bb_48e2_aad6_f2afaead73bb.slice: Consumed 1.299s CPU time. Jan 30 04:59:12.705466 kubelet[2892]: I0130 04:59:12.705419 2892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" path="/var/lib/kubelet/pods/01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2/volumes" Jan 30 04:59:12.706622 kubelet[2892]: I0130 04:59:12.706597 2892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110a281d-60bb-48e2-aad6-f2afaead73bb" path="/var/lib/kubelet/pods/110a281d-60bb-48e2-aad6-f2afaead73bb/volumes" Jan 30 04:59:13.287516 sshd[5891]: Connection closed by 139.178.89.65 port 34612 Jan 30 04:59:13.288455 sshd-session[5889]: pam_unix(sshd:session): session closed for user core Jan 30 04:59:13.291731 systemd[1]: sshd@124-116.202.14.223:22-139.178.89.65:34612.service: Deactivated successfully. Jan 30 04:59:13.294123 systemd[1]: session-120.scope: Deactivated successfully. Jan 30 04:59:13.295981 systemd-logind[1509]: Session 120 logged out. Waiting for processes to exit. Jan 30 04:59:13.297182 systemd-logind[1509]: Removed session 120. Jan 30 04:59:13.462124 systemd[1]: Started sshd@125-116.202.14.223:22-139.178.89.65:44490.service - OpenSSH per-connection server daemon (139.178.89.65:44490). 
Jan 30 04:59:14.015167 kubelet[2892]: E0130 04:59:14.015103 2892 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 04:59:14.448748 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 44490 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:14.450332 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:14.455716 systemd-logind[1509]: New session 121 of user core. Jan 30 04:59:14.460040 systemd[1]: Started session-121.scope - Session 121 of User core. Jan 30 04:59:15.427998 kubelet[2892]: I0130 04:59:15.427918 2892 topology_manager.go:215] "Topology Admit Handler" podUID="1f2f2d19-2edc-475d-aac1-a93cade90f16" podNamespace="kube-system" podName="cilium-x4hlb" Jan 30 04:59:15.441743 kubelet[2892]: E0130 04:59:15.441703 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="apply-sysctl-overwrites" Jan 30 04:59:15.441743 kubelet[2892]: E0130 04:59:15.441746 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="cilium-agent" Jan 30 04:59:15.441743 kubelet[2892]: E0130 04:59:15.441755 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="110a281d-60bb-48e2-aad6-f2afaead73bb" containerName="cilium-operator" Jan 30 04:59:15.441950 kubelet[2892]: E0130 04:59:15.441761 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="mount-cgroup" Jan 30 04:59:15.441950 kubelet[2892]: E0130 04:59:15.441769 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="mount-bpf-fs" Jan 30 04:59:15.441950 kubelet[2892]: E0130 04:59:15.441774 2892 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="clean-cilium-state" Jan 30 04:59:15.445522 kubelet[2892]: I0130 04:59:15.445501 2892 memory_manager.go:354] "RemoveStaleState removing state" podUID="110a281d-60bb-48e2-aad6-f2afaead73bb" containerName="cilium-operator" Jan 30 04:59:15.445522 kubelet[2892]: I0130 04:59:15.445524 2892 memory_manager.go:354] "RemoveStaleState removing state" podUID="01878e3e-38d5-4b4e-a4b5-d1ade3a7f9c2" containerName="cilium-agent" Jan 30 04:59:15.481765 systemd[1]: Created slice kubepods-burstable-pod1f2f2d19_2edc_475d_aac1_a93cade90f16.slice - libcontainer container kubepods-burstable-pod1f2f2d19_2edc_475d_aac1_a93cade90f16.slice. 
Jan 30 04:59:15.568403 kubelet[2892]: I0130 04:59:15.568334 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-host-proc-sys-net\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568403 kubelet[2892]: I0130 04:59:15.568378 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-hostproc\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568403 kubelet[2892]: I0130 04:59:15.568400 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-bpf-maps\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568417 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-host-proc-sys-kernel\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568435 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-cilium-run\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568451 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-etc-cni-netd\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568468 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrmf\" (UniqueName: \"kubernetes.io/projected/1f2f2d19-2edc-475d-aac1-a93cade90f16-kube-api-access-kzrmf\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568486 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-xtables-lock\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.568843 kubelet[2892]: I0130 04:59:15.568504 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-cilium-cgroup\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568519 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-cni-path\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568552 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f2f2d19-2edc-475d-aac1-a93cade90f16-clustermesh-secrets\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568568 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f2f2d19-2edc-475d-aac1-a93cade90f16-cilium-ipsec-secrets\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568587 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f2f2d19-2edc-475d-aac1-a93cade90f16-lib-modules\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568604 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f2f2d19-2edc-475d-aac1-a93cade90f16-cilium-config-path\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.569034 kubelet[2892]: I0130 04:59:15.568620 2892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f2f2d19-2edc-475d-aac1-a93cade90f16-hubble-tls\") pod \"cilium-x4hlb\" (UID: \"1f2f2d19-2edc-475d-aac1-a93cade90f16\") " pod="kube-system/cilium-x4hlb" Jan 30 04:59:15.658313 sshd[6054]: Connection closed by 139.178.89.65 port 44490 Jan 30 04:59:15.659090 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Jan 30 04:59:15.662294 systemd[1]: sshd@125-116.202.14.223:22-139.178.89.65:44490.service: Deactivated successfully. Jan 30 04:59:15.664594 systemd[1]: session-121.scope: Deactivated successfully. Jan 30 04:59:15.666242 systemd-logind[1509]: Session 121 logged out. Waiting for processes to exit. Jan 30 04:59:15.667427 systemd-logind[1509]: Removed session 121. Jan 30 04:59:15.784763 containerd[1524]: time="2025-01-30T04:59:15.784697393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4hlb,Uid:1f2f2d19-2edc-475d-aac1-a93cade90f16,Namespace:kube-system,Attempt:0,}" Jan 30 04:59:15.805862 containerd[1524]: time="2025-01-30T04:59:15.805769178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 04:59:15.806023 containerd[1524]: time="2025-01-30T04:59:15.805846163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 04:59:15.806023 containerd[1524]: time="2025-01-30T04:59:15.805866892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:59:15.806650 containerd[1524]: time="2025-01-30T04:59:15.806601126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 04:59:15.831020 systemd[1]: Started cri-containerd-014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917.scope - libcontainer container 014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917. Jan 30 04:59:15.835950 systemd[1]: Started sshd@126-116.202.14.223:22-139.178.89.65:44506.service - OpenSSH per-connection server daemon (139.178.89.65:44506). Jan 30 04:59:15.861356 containerd[1524]: time="2025-01-30T04:59:15.861307718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4hlb,Uid:1f2f2d19-2edc-475d-aac1-a93cade90f16,Namespace:kube-system,Attempt:0,} returns sandbox id \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\"" Jan 30 04:59:15.874103 containerd[1524]: time="2025-01-30T04:59:15.874077697Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 04:59:15.882816 containerd[1524]: time="2025-01-30T04:59:15.882748925Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c\"" Jan 30 04:59:15.883849 containerd[1524]: time="2025-01-30T04:59:15.883198496Z" level=info msg="StartContainer for \"78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c\"" Jan 30 04:59:15.909024 systemd[1]: Started cri-containerd-78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c.scope - libcontainer container 78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c. Jan 30 04:59:15.932918 containerd[1524]: time="2025-01-30T04:59:15.932796539Z" level=info msg="StartContainer for \"78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c\" returns successfully" Jan 30 04:59:15.948340 systemd[1]: cri-containerd-78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c.scope: Deactivated successfully. Jan 30 04:59:15.978185 containerd[1524]: time="2025-01-30T04:59:15.978124038Z" level=info msg="shim disconnected" id=78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c namespace=k8s.io Jan 30 04:59:15.978185 containerd[1524]: time="2025-01-30T04:59:15.978175645Z" level=warning msg="cleaning up after shim disconnected" id=78de343d908ee353916b74347fe9f83f8234e7f96bc529d455e9748ac90edc8c namespace=k8s.io Jan 30 04:59:15.978185 containerd[1524]: time="2025-01-30T04:59:15.978183450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:16.695311 containerd[1524]: time="2025-01-30T04:59:16.695244093Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 04:59:16.714482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412323735.mount: Deactivated successfully. 
Jan 30 04:59:16.716215 containerd[1524]: time="2025-01-30T04:59:16.715644963Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf\"" Jan 30 04:59:16.716922 containerd[1524]: time="2025-01-30T04:59:16.716849226Z" level=info msg="StartContainer for \"96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf\"" Jan 30 04:59:16.743018 systemd[1]: Started cri-containerd-96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf.scope - libcontainer container 96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf. Jan 30 04:59:16.767878 containerd[1524]: time="2025-01-30T04:59:16.767788007Z" level=info msg="StartContainer for \"96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf\" returns successfully" Jan 30 04:59:16.779476 systemd[1]: cri-containerd-96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf.scope: Deactivated successfully. Jan 30 04:59:16.801345 containerd[1524]: time="2025-01-30T04:59:16.801275238Z" level=info msg="shim disconnected" id=96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf namespace=k8s.io Jan 30 04:59:16.801345 containerd[1524]: time="2025-01-30T04:59:16.801335431Z" level=warning msg="cleaning up after shim disconnected" id=96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf namespace=k8s.io Jan 30 04:59:16.801871 containerd[1524]: time="2025-01-30T04:59:16.801350911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:16.823748 sshd[6098]: Accepted publickey for core from 139.178.89.65 port 44506 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:16.825219 sshd-session[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:16.829957 systemd-logind[1509]: New session 122 of user core. Jan 30 04:59:16.835030 systemd[1]: Started session-122.scope - Session 122 of User core. Jan 30 04:59:17.500274 sshd[6236]: Connection closed by 139.178.89.65 port 44506 Jan 30 04:59:17.501096 sshd-session[6098]: pam_unix(sshd:session): session closed for user core Jan 30 04:59:17.504313 systemd[1]: sshd@126-116.202.14.223:22-139.178.89.65:44506.service: Deactivated successfully. Jan 30 04:59:17.506634 systemd[1]: session-122.scope: Deactivated successfully. Jan 30 04:59:17.508672 systemd-logind[1509]: Session 122 logged out. Waiting for processes to exit. Jan 30 04:59:17.509990 systemd-logind[1509]: Removed session 122. Jan 30 04:59:17.676501 systemd[1]: Started sshd@127-116.202.14.223:22-139.178.89.65:44508.service - OpenSSH per-connection server daemon (139.178.89.65:44508). Jan 30 04:59:17.680842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96260267aec2f50b74c8a69a876f7531947b94adf47f2d081326377f8344a3bf-rootfs.mount: Deactivated successfully. 
Jan 30 04:59:17.703836 containerd[1524]: time="2025-01-30T04:59:17.703766667Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 04:59:17.723956 containerd[1524]: time="2025-01-30T04:59:17.723909854Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a\"" Jan 30 04:59:17.724915 containerd[1524]: time="2025-01-30T04:59:17.724450416Z" level=info msg="StartContainer for \"ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a\"" Jan 30 04:59:17.755090 systemd[1]: Started cri-containerd-ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a.scope - libcontainer container ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a. Jan 30 04:59:17.790002 containerd[1524]: time="2025-01-30T04:59:17.789937625Z" level=info msg="StartContainer for \"ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a\" returns successfully" Jan 30 04:59:17.795744 systemd[1]: cri-containerd-ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a.scope: Deactivated successfully. Jan 30 04:59:17.825459 containerd[1524]: time="2025-01-30T04:59:17.825407807Z" level=info msg="shim disconnected" id=ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a namespace=k8s.io Jan 30 04:59:17.826010 containerd[1524]: time="2025-01-30T04:59:17.825982643Z" level=warning msg="cleaning up after shim disconnected" id=ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a namespace=k8s.io Jan 30 04:59:17.826010 containerd[1524]: time="2025-01-30T04:59:17.826004434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:18.652122 sshd[6242]: Accepted publickey for core from 139.178.89.65 port 44508 ssh2: RSA SHA256:kkEy55rURzL1eOh3ZgtGLrCZDR28qa36X7bjh8+Nfdg Jan 30 04:59:18.653664 sshd-session[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 04:59:18.659147 systemd-logind[1509]: New session 123 of user core. Jan 30 04:59:18.666109 systemd[1]: Started session-123.scope - Session 123 of User core. Jan 30 04:59:18.679701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed621d95c60d1a1e06801dd5be6820cef304e2b5a5f5907b23937e30778f399a-rootfs.mount: Deactivated successfully. 
Jan 30 04:59:18.705198 kubelet[2892]: E0130 04:59:18.704799 2892 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rqs26" podUID="45f8009a-0c22-4d8c-9038-4e030021e956" Jan 30 04:59:18.727439 containerd[1524]: time="2025-01-30T04:59:18.727020565Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 04:59:18.742164 containerd[1524]: time="2025-01-30T04:59:18.742117840Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424\"" Jan 30 04:59:18.743693 containerd[1524]: time="2025-01-30T04:59:18.742665175Z" level=info msg="StartContainer for \"a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424\"" Jan 30 04:59:18.767661 systemd[1]: run-containerd-runc-k8s.io-a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424-runc.9z576I.mount: Deactivated successfully. Jan 30 04:59:18.774049 systemd[1]: Started cri-containerd-a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424.scope - libcontainer container a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424. Jan 30 04:59:18.801647 systemd[1]: cri-containerd-a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424.scope: Deactivated successfully. Jan 30 04:59:18.802867 containerd[1524]: time="2025-01-30T04:59:18.802590234Z" level=info msg="StartContainer for \"a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424\" returns successfully" Jan 30 04:59:18.824851 containerd[1524]: time="2025-01-30T04:59:18.824766918Z" level=info msg="shim disconnected" id=a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424 namespace=k8s.io Jan 30 04:59:18.824851 containerd[1524]: time="2025-01-30T04:59:18.824813706Z" level=warning msg="cleaning up after shim disconnected" id=a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424 namespace=k8s.io Jan 30 04:59:18.824851 containerd[1524]: time="2025-01-30T04:59:18.824836778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 04:59:19.016184 kubelet[2892]: E0130 04:59:19.016139 2892 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 04:59:19.423909 kubelet[2892]: I0130 04:59:19.423619 2892 setters.go:580] "Node became not ready" node="ci-4186-1-0-8-df2fd9e83c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T04:59:19Z","lastTransitionTime":"2025-01-30T04:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 04:59:19.679633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f568ab585bc203fcd2e46f093703c83bb3f8f07b1976737842bc8360147424-rootfs.mount: Deactivated successfully. 
Jan 30 04:59:19.730018 containerd[1524]: time="2025-01-30T04:59:19.729792106Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 04:59:19.746344 containerd[1524]: time="2025-01-30T04:59:19.746269193Z" level=info msg="CreateContainer within sandbox \"014ee588dd483f7706b98829660ceed58ef3bc5dea8f850e96f9b07fc296a917\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45\"" Jan 30 04:59:19.749257 containerd[1524]: time="2025-01-30T04:59:19.747571861Z" level=info msg="StartContainer for \"6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45\"" Jan 30 04:59:19.777032 systemd[1]: Started cri-containerd-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45.scope - libcontainer container 6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45. Jan 30 04:59:19.810963 containerd[1524]: time="2025-01-30T04:59:19.810928263Z" level=info msg="StartContainer for \"6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45\" returns successfully" Jan 30 04:59:20.405928 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 04:59:20.704635 kubelet[2892]: E0130 04:59:20.703993 2892 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rqs26" podUID="45f8009a-0c22-4d8c-9038-4e030021e956" Jan 30 04:59:20.748171 kubelet[2892]: I0130 04:59:20.747375 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x4hlb" podStartSLOduration=5.747356037 podStartE2EDuration="5.747356037s" podCreationTimestamp="2025-01-30 04:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 04:59:20.747120807 +0000 UTC m=+1042.132794280" watchObservedRunningTime="2025-01-30 04:59:20.747356037 +0000 UTC m=+1042.133029510" Jan 30 04:59:22.704454 kubelet[2892]: E0130 04:59:22.704398 2892 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-rqs26" podUID="45f8009a-0c22-4d8c-9038-4e030021e956" Jan 30 04:59:23.228181 systemd-networkd[1383]: lxc_health: Link UP Jan 30 04:59:23.234513 systemd-networkd[1383]: lxc_health: Gained carrier Jan 30 04:59:24.720685 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jan 30 04:59:27.779521 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.YrAuGv.mount: Deactivated successfully. Jan 30 04:59:27.826737 kubelet[2892]: E0130 04:59:27.826473 2892 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42488->127.0.0.1:44741: read tcp 127.0.0.1:42488->127.0.0.1:44741: read: connection reset by peer Jan 30 04:59:31.963230 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.Rb4IrT.mount: Deactivated successfully. 
Jan 30 04:59:42.429704 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.kcUg82.mount: Deactivated successfully. Jan 30 04:59:50.796995 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.tN7aWM.mount: Deactivated successfully. Jan 30 04:59:58.731943 containerd[1524]: time="2025-01-30T04:59:58.731881115Z" level=info msg="StopPodSandbox for \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\"" Jan 30 04:59:58.732661 containerd[1524]: time="2025-01-30T04:59:58.732010056Z" level=info msg="TearDown network for sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" successfully" Jan 30 04:59:58.732661 containerd[1524]: time="2025-01-30T04:59:58.732033570Z" level=info msg="StopPodSandbox for \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" returns successfully" Jan 30 04:59:58.732661 containerd[1524]: time="2025-01-30T04:59:58.732518347Z" level=info msg="RemovePodSandbox for \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\"" Jan 30 04:59:58.732661 containerd[1524]: time="2025-01-30T04:59:58.732577839Z" level=info msg="Forcibly stopping sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\"" Jan 30 04:59:58.732857 containerd[1524]: time="2025-01-30T04:59:58.732648331Z" level=info msg="TearDown network for sandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" successfully" Jan 30 04:59:58.736426 containerd[1524]: time="2025-01-30T04:59:58.736385305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 04:59:58.736584 containerd[1524]: time="2025-01-30T04:59:58.736439606Z" level=info msg="RemovePodSandbox \"21a6a90f7de75bd51d5ad9bec88841220c05183e48f22fa643b9242edde85553\" returns successfully" Jan 30 04:59:58.737097 containerd[1524]: time="2025-01-30T04:59:58.736798890Z" level=info msg="StopPodSandbox for \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\"" Jan 30 04:59:58.737097 containerd[1524]: time="2025-01-30T04:59:58.736938530Z" level=info msg="TearDown network for sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" successfully" Jan 30 04:59:58.737097 containerd[1524]: time="2025-01-30T04:59:58.737025273Z" level=info msg="StopPodSandbox for \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" returns successfully" Jan 30 04:59:58.737329 containerd[1524]: time="2025-01-30T04:59:58.737296611Z" level=info msg="RemovePodSandbox for \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\"" Jan 30 04:59:58.737329 containerd[1524]: time="2025-01-30T04:59:58.737315116Z" level=info msg="Forcibly stopping sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\"" Jan 30 04:59:58.737406 containerd[1524]: time="2025-01-30T04:59:58.737367053Z" level=info msg="TearDown network for sandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" successfully" Jan 30 04:59:58.740637 containerd[1524]: time="2025-01-30T04:59:58.740605635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 30 04:59:58.740708 containerd[1524]: time="2025-01-30T04:59:58.740643365Z" level=info msg="RemovePodSandbox \"7c0376dc791b4c5ddb4c788600ceace473d5dfe4ebbe2483b8718bbfe07db89e\" returns successfully" Jan 30 05:00:13.769016 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.VUvPUF.mount: Deactivated successfully. Jan 30 05:00:17.950293 systemd[1]: run-containerd-runc-k8s.io-6f18a2cbd651f85a747626f42c0aa233515bfe4e7295cb00ce12ff7b1acf7a45-runc.2LZ05m.mount: Deactivated successfully. Jan 30 05:00:20.152420 sshd[6303]: Connection closed by 139.178.89.65 port 44508 Jan 30 05:00:20.153688 sshd-session[6242]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:20.157762 systemd[1]: sshd@127-116.202.14.223:22-139.178.89.65:44508.service: Deactivated successfully. Jan 30 05:00:20.160611 systemd[1]: session-123.scope: Deactivated successfully. Jan 30 05:00:20.162659 systemd-logind[1509]: Session 123 logged out. Waiting for processes to exit. Jan 30 05:00:20.164517 systemd-logind[1509]: Removed session 123. Jan 30 05:00:36.749641 systemd[1]: cri-containerd-6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d.scope: Deactivated successfully. Jan 30 05:00:36.750286 systemd[1]: cri-containerd-6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d.scope: Consumed 13.326s CPU time, 22.4M memory peak, 0B memory swap peak. Jan 30 05:00:36.777344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d-rootfs.mount: Deactivated successfully. Jan 30 05:00:36.782810 containerd[1524]: time="2025-01-30T05:00:36.782734372Z" level=info msg="shim disconnected" id=6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d namespace=k8s.io Jan 30 05:00:36.782810 containerd[1524]: time="2025-01-30T05:00:36.782788523Z" level=warning msg="cleaning up after shim disconnected" id=6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d namespace=k8s.io Jan 30 05:00:36.782810 containerd[1524]: time="2025-01-30T05:00:36.782798582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:00:36.790903 kubelet[2892]: E0130 05:00:36.790833 2892 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34518->10.0.0.2:2379: read: connection timed out" Jan 30 05:00:36.796369 systemd[1]: cri-containerd-257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb.scope: Deactivated successfully. Jan 30 05:00:36.796696 systemd[1]: cri-containerd-257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb.scope: Consumed 2.961s CPU time, 16.6M memory peak, 0B memory swap peak. Jan 30 05:00:36.819466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb-rootfs.mount: Deactivated successfully. 
Jan 30 05:00:36.825467 containerd[1524]: time="2025-01-30T05:00:36.825398422Z" level=info msg="shim disconnected" id=257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb namespace=k8s.io Jan 30 05:00:36.825467 containerd[1524]: time="2025-01-30T05:00:36.825448186Z" level=warning msg="cleaning up after shim disconnected" id=257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb namespace=k8s.io Jan 30 05:00:36.825467 containerd[1524]: time="2025-01-30T05:00:36.825457142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:00:36.878343 kubelet[2892]: I0130 05:00:36.878300 2892 scope.go:117] "RemoveContainer" containerID="6783804cf24e209465e5955c53e94ebd98eca8266ceb246264543138325a347d" Jan 30 05:00:36.881279 containerd[1524]: time="2025-01-30T05:00:36.880782123Z" level=info msg="CreateContainer within sandbox \"adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 30 05:00:36.881447 kubelet[2892]: I0130 05:00:36.881365 2892 scope.go:117] "RemoveContainer" containerID="257de490127b037c5e468102156976d0cd58283414130bcfd7a69f245c3560cb" Jan 30 05:00:36.884364 containerd[1524]: time="2025-01-30T05:00:36.884339331Z" level=info msg="CreateContainer within sandbox \"e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 30 05:00:36.897805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825482409.mount: Deactivated successfully. Jan 30 05:00:36.901578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562415637.mount: Deactivated successfully. Jan 30 05:00:36.902054 containerd[1524]: time="2025-01-30T05:00:36.901907926Z" level=info msg="CreateContainer within sandbox \"adbec2bf8120ac1407302136f84209cedd2beb96152ff13ba2f69e89f90a3c2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"73565ccd069bb09663cf19bdf91cb55227ad6a84d12f75af03c83e197e33670a\"" Jan 30 05:00:36.902879 containerd[1524]: time="2025-01-30T05:00:36.902814723Z" level=info msg="StartContainer for \"73565ccd069bb09663cf19bdf91cb55227ad6a84d12f75af03c83e197e33670a\"" Jan 30 05:00:36.904036 containerd[1524]: time="2025-01-30T05:00:36.903980315Z" level=info msg="CreateContainer within sandbox \"e89fec12a6e2c603f9f087eb9c32187ae4bbafecab6e536436311428f6bca1b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6752b2dc15e669053452b8b75caec28e8228bd5c851e36068723476afbb14a36\"" Jan 30 05:00:36.904444 containerd[1524]: time="2025-01-30T05:00:36.904423043Z" level=info msg="StartContainer for \"6752b2dc15e669053452b8b75caec28e8228bd5c851e36068723476afbb14a36\"" Jan 30 05:00:36.928292 systemd[1]: Started cri-containerd-73565ccd069bb09663cf19bdf91cb55227ad6a84d12f75af03c83e197e33670a.scope - libcontainer container 73565ccd069bb09663cf19bdf91cb55227ad6a84d12f75af03c83e197e33670a. Jan 30 05:00:36.946021 systemd[1]: Started cri-containerd-6752b2dc15e669053452b8b75caec28e8228bd5c851e36068723476afbb14a36.scope - libcontainer container 6752b2dc15e669053452b8b75caec28e8228bd5c851e36068723476afbb14a36. 
Jan 30 05:00:36.986771 containerd[1524]: time="2025-01-30T05:00:36.986511471Z" level=info msg="StartContainer for \"73565ccd069bb09663cf19bdf91cb55227ad6a84d12f75af03c83e197e33670a\" returns successfully" Jan 30 05:00:36.992229 containerd[1524]: time="2025-01-30T05:00:36.992183087Z" level=info msg="StartContainer for \"6752b2dc15e669053452b8b75caec28e8228bd5c851e36068723476afbb14a36\" returns successfully" Jan 30 05:00:39.027198 kubelet[2892]: E0130 05:00:39.027037 2892 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34318->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-0-8-df2fd9e83c.181f5fb9777af519 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-0-8-df2fd9e83c,UID:d17c49c45fac2ff9fd874398841248ae,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-8-df2fd9e83c,},FirstTimestamp:2025-01-30 05:00:31.366436121 +0000 UTC m=+1112.752109604,LastTimestamp:2025-01-30 05:00:31.366436121 +0000 UTC m=+1112.752109604,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-8-df2fd9e83c,}"