Sep 5 23:50:09.937493 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:50:09.937524 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:50:09.937537 kernel: KASLR enabled
Sep 5 23:50:09.937543 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 5 23:50:09.937549 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 5 23:50:09.937554 kernel: random: crng init done
Sep 5 23:50:09.937561 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:50:09.937568 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 5 23:50:09.937574 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:50:09.937582 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937588 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937594 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937600 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937606 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937614 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937622 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937629 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937635 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:09.937642 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 23:50:09.937648 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 5 23:50:09.937654 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:50:09.937661 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:09.937667 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Sep 5 23:50:09.937673 kernel: Zone ranges:
Sep 5 23:50:09.937679 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 5 23:50:09.937688 kernel: DMA32 empty
Sep 5 23:50:09.937694 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 5 23:50:09.937700 kernel: Movable zone start for each node
Sep 5 23:50:09.937707 kernel: Early memory node ranges
Sep 5 23:50:09.937713 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 5 23:50:09.937720 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 5 23:50:09.937726 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 5 23:50:09.937732 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 5 23:50:09.937739 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 5 23:50:09.937745 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 5 23:50:09.937751 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 5 23:50:09.937757 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:09.937765 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 5 23:50:09.937772 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:50:09.937778 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:50:09.937787 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:50:09.937794 kernel: psci: Trusted OS migration not required
Sep 5 23:50:09.937800 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:50:09.937809 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:50:09.937816 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:50:09.937823 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:50:09.937830 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:50:09.937837 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:50:09.937844 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:50:09.937850 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:50:09.937857 kernel: CPU features: detected: Spectre-v4
Sep 5 23:50:09.937864 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:50:09.937871 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:50:09.937879 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:50:09.937887 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:50:09.937893 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:50:09.937900 kernel: alternatives: applying boot alternatives
Sep 5 23:50:09.937908 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:50:09.937915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:50:09.937922 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:50:09.937929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:50:09.937936 kernel: Fallback order for Node 0: 0
Sep 5 23:50:09.937942 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 5 23:50:09.937949 kernel: Policy zone: Normal
Sep 5 23:50:09.937958 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:50:09.937965 kernel: software IO TLB: area num 2.
Sep 5 23:50:09.937971 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 5 23:50:09.937979 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved)
Sep 5 23:50:09.937986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:50:09.937993 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:50:09.938001 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:50:09.938008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:50:09.938015 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:50:09.938022 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:50:09.938028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:50:09.938037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:50:09.938044 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:50:09.938051 kernel: GICv3: 256 SPIs implemented Sep 5 23:50:09.938058 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:50:09.938064 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:50:09.938071 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 23:50:09.938078 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 5 23:50:09.938084 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 5 23:50:09.938091 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 5 23:50:09.938099 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 5 23:50:09.938106 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 5 23:50:09.938113 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 5 23:50:09.938121 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:50:09.938128 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:50:09.938135 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 23:50:09.938142 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 23:50:09.938149 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 23:50:09.938156 kernel: Console: colour dummy device 80x25 Sep 5 23:50:09.938163 kernel: ACPI: Core revision 20230628 Sep 5 23:50:09.938170 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 5 23:50:09.938177 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:50:09.938184 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:50:09.938193 kernel: landlock: Up and running. Sep 5 23:50:09.938199 kernel: SELinux: Initializing. Sep 5 23:50:09.938206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:50:09.938257 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:50:09.938265 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:50:09.938272 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:50:09.938280 kernel: rcu: Hierarchical SRCU implementation. Sep 5 23:50:09.938287 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:50:09.938294 kernel: Platform MSI: ITS@0x8080000 domain created Sep 5 23:50:09.938304 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 5 23:50:09.938311 kernel: Remapping and enabling EFI services. Sep 5 23:50:09.938318 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:50:09.938325 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:50:09.938332 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 5 23:50:09.938340 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 5 23:50:09.938347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:50:09.938354 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 23:50:09.938361 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:50:09.938368 kernel: SMP: Total of 2 processors activated. 
Sep 5 23:50:09.938377 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:50:09.938384 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 23:50:09.938397 kernel: CPU features: detected: Common not Private translations Sep 5 23:50:09.938437 kernel: CPU features: detected: CRC32 instructions Sep 5 23:50:09.938445 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 5 23:50:09.938452 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 23:50:09.938459 kernel: CPU features: detected: LSE atomic instructions Sep 5 23:50:09.938467 kernel: CPU features: detected: Privileged Access Never Sep 5 23:50:09.938474 kernel: CPU features: detected: RAS Extension Support Sep 5 23:50:09.938484 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 5 23:50:09.938491 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:50:09.938499 kernel: alternatives: applying system-wide alternatives Sep 5 23:50:09.938506 kernel: devtmpfs: initialized Sep 5 23:50:09.938513 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:50:09.938521 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:50:09.938529 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:50:09.938538 kernel: SMBIOS 3.0.0 present. Sep 5 23:50:09.938546 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 5 23:50:09.938554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:50:09.938562 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:50:09.938570 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:50:09.938578 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:50:09.938585 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:50:09.938592 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Sep 5 23:50:09.938600 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:50:09.938608 kernel: cpuidle: using governor menu Sep 5 23:50:09.938616 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 23:50:09.938623 kernel: ASID allocator initialised with 32768 entries Sep 5 23:50:09.938630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:50:09.938638 kernel: Serial: AMBA PL011 UART driver Sep 5 23:50:09.938645 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 23:50:09.938652 kernel: Modules: 0 pages in range for non-PLT usage Sep 5 23:50:09.938660 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:50:09.938667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:50:09.938676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:50:09.938683 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:50:09.938691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:50:09.938698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:50:09.938705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:50:09.938712 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:50:09.938720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:50:09.938727 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:50:09.938734 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:50:09.938743 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:50:09.938750 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:50:09.938757 kernel: ACPI: Interpreter enabled Sep 5 23:50:09.938765 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:50:09.938772 kernel: ACPI: MCFG table detected, 1 entries Sep 5 23:50:09.938780 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 5 23:50:09.938787 kernel: printk: console [ttyAMA0] enabled Sep 5 23:50:09.938794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 23:50:09.938973 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 23:50:09.939060 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 23:50:09.939127 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 5 23:50:09.939192 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 5 23:50:09.939277 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 5 23:50:09.939289 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 5 23:50:09.939297 kernel: PCI host bridge to bus 0000:00 Sep 5 23:50:09.939378 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 5 23:50:09.940663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 23:50:09.940741 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 5 23:50:09.940801 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 23:50:09.940892 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 5 23:50:09.940974 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 5 23:50:09.941043 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 5 23:50:09.941118 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:50:09.941196 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.941287 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 5 23:50:09.941374 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Sep 5 23:50:09.942478 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 5 23:50:09.942591 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.942660 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 5 23:50:09.942743 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.942810 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 5 23:50:09.942887 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.942953 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 5 23:50:09.943027 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.943096 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 5 23:50:09.943169 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.943253 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 5 23:50:09.943330 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.943411 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 5 23:50:09.943515 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:09.943584 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 5 23:50:09.943674 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 5 23:50:09.943744 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 5 23:50:09.943828 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:50:09.943911 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 5 23:50:09.943994 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:50:09.944070 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:50:09.944162 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 5 23:50:09.944247 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 5 23:50:09.944369 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 5 23:50:09.944503 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 5 23:50:09.944582 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 5 23:50:09.944683 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 5 23:50:09.944760 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 5 23:50:09.944850 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 5 23:50:09.944922 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 5 23:50:09.945001 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 5 23:50:09.945073 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 5 23:50:09.945145 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:50:09.945271 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:50:09.945358 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 5 23:50:09.945452 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 5 23:50:09.945529 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:50:09.945604 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 5 23:50:09.945674 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:50:09.945743 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:50:09.945822 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 5 23:50:09.945893 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 5 23:50:09.945961 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 5 23:50:09.946033 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 5 23:50:09.946103 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:50:09.946172 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:50:09.946262 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 5 23:50:09.946519 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 5 23:50:09.946615 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 5 23:50:09.946691 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 5 23:50:09.946762 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 5 23:50:09.946832 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 5 23:50:09.946905 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 5 23:50:09.946975 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:50:09.947044 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:50:09.947120 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 5 23:50:09.947190 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:50:09.947307 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:50:09.947387 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 5 23:50:09.947525 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:50:09.947598 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:50:09.947672 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 5 23:50:09.947742 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:50:09.947816 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:50:09.947887 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 5 23:50:09.947957 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:09.948078 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 5 23:50:09.948157 kernel: pci 0000:00:02.1: BAR 15: 
assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:09.948246 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 5 23:50:09.948320 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:09.948395 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 5 23:50:09.949648 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:09.949722 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 5 23:50:09.949787 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:09.949855 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 5 23:50:09.949922 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:09.949999 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 5 23:50:09.950066 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:09.950137 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 5 23:50:09.950203 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:09.950331 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 5 23:50:09.951461 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:09.951593 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 5 23:50:09.951674 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 5 23:50:09.951748 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 5 23:50:09.951815 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 5 23:50:09.951887 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 5 23:50:09.951953 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 5 23:50:09.952026 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 5 23:50:09.952094 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 5 23:50:09.952171 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 5 23:50:09.952269 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 5 23:50:09.952346 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 5 23:50:09.953867 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 5 23:50:09.953976 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 5 23:50:09.954044 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 5 23:50:09.954114 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 5 23:50:09.954179 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 5 23:50:09.954267 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 5 23:50:09.954344 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 5 23:50:09.954516 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 5 23:50:09.954587 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 5 23:50:09.954661 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 5 23:50:09.954738 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Sep 5 23:50:09.954806 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:50:09.954872 
kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 5 23:50:09.954940 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 5 23:50:09.955013 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 5 23:50:09.955077 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 5 23:50:09.955142 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:09.955233 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 5 23:50:09.955308 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 5 23:50:09.955383 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 5 23:50:09.956262 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 5 23:50:09.956359 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:09.956732 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:50:09.956815 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 5 23:50:09.956884 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 5 23:50:09.956951 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 5 23:50:09.957023 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 5 23:50:09.957090 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:09.957173 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:50:09.957281 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 5 23:50:09.957366 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 5 23:50:09.957569 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 5 23:50:09.957646 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:09.957723 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 5 23:50:09.957826 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 5 23:50:09.957919 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 5 23:50:09.957986 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 5 23:50:09.958050 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:09.958124 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 5 23:50:09.958191 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 5 23:50:09.958276 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 5 23:50:09.958345 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 5 23:50:09.958463 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 5 23:50:09.958531 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:09.958607 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 5 23:50:09.958675 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 5 23:50:09.958742 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 5 23:50:09.958812 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 5 23:50:09.958878 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 5 23:50:09.958944 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Sep 5 23:50:09.959017 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:09.959088 kernel: pci 0000:00:02.7: 
PCI bridge to [bus 08] Sep 5 23:50:09.959154 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 5 23:50:09.959267 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 5 23:50:09.959348 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:09.959440 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 5 23:50:09.959514 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 5 23:50:09.959582 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 5 23:50:09.959655 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:09.959726 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 5 23:50:09.959790 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 23:50:09.959849 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 5 23:50:09.959934 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 5 23:50:09.960019 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 5 23:50:09.960086 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:09.960163 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 5 23:50:09.960243 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 5 23:50:09.960310 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:09.960385 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 5 23:50:09.960519 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 5 23:50:09.960586 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:09.960663 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 23:50:09.960730 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 5 23:50:09.960793 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:09.960874 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 5 23:50:09.961047 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 5 23:50:09.961112 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:09.961184 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 5 23:50:09.961269 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 5 23:50:09.961337 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:09.961531 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 5 23:50:09.961606 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 5 23:50:09.961742 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:09.961822 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 5 23:50:09.961883 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 5 23:50:09.961942 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:09.962009 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 5 23:50:09.962068 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 5 23:50:09.962128 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:09.962141 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 23:50:09.962149 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 23:50:09.962157 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 
23:50:09.962167 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 23:50:09.962175 kernel: iommu: Default domain type: Translated Sep 5 23:50:09.962183 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:50:09.962190 kernel: efivars: Registered efivars operations Sep 5 23:50:09.962198 kernel: vgaarb: loaded Sep 5 23:50:09.962206 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:50:09.962226 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:50:09.962234 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:50:09.962242 kernel: pnp: PnP ACPI init Sep 5 23:50:09.962337 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 5 23:50:09.962349 kernel: pnp: PnP ACPI: found 1 devices Sep 5 23:50:09.962357 kernel: NET: Registered PF_INET protocol family Sep 5 23:50:09.962365 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:50:09.962374 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:50:09.962385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:50:09.962393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:50:09.962417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:50:09.962425 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:50:09.962434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:50:09.962442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:50:09.962450 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:50:09.962533 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 5 23:50:09.962546 kernel: PCI: CLS 0 bytes, default 64 Sep 5 23:50:09.962557 kernel: kvm [1]: HYP mode not available Sep 5 23:50:09.962565 kernel: Initialise system trusted keyrings Sep 5 23:50:09.962572 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:50:09.962580 kernel: Key type asymmetric registered Sep 5 23:50:09.962588 kernel: Asymmetric key parser 'x509' registered Sep 5 23:50:09.962596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:50:09.962604 kernel: io scheduler mq-deadline registered Sep 5 23:50:09.962611 kernel: io scheduler kyber registered Sep 5 23:50:09.962619 kernel: io scheduler bfq registered Sep 5 23:50:09.962630 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 5 23:50:09.962700 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 5 23:50:09.962767 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 5 23:50:09.962834 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.962904 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 5 23:50:09.962971 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 5 23:50:09.963037 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.963111 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 5 23:50:09.963177 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 5 23:50:09.963260 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Sep 5 23:50:09.963333 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 5 23:50:09.963653 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 5 23:50:09.963747 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.964554 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 5 23:50:09.964639 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 5 23:50:09.964705 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.964776 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 5 23:50:09.964844 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 5 23:50:09.964917 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.964988 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 5 23:50:09.965054 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 5 23:50:09.965119 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.965188 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 5 23:50:09.965278 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 5 23:50:09.965355 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.965371 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 5 23:50:09.966657 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 5 23:50:09.966754 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 5 23:50:09.966822 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:09.966837 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 23:50:09.966846 kernel: ACPI: button: Power Button [PWRB] Sep 5 23:50:09.966855 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 23:50:09.966940 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 5 23:50:09.967016 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 5 23:50:09.967028 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:50:09.967039 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 5 23:50:09.967107 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 5 23:50:09.967118 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 5 23:50:09.967126 kernel: thunder_xcv, ver 1.0 Sep 5 23:50:09.967134 kernel: thunder_bgx, ver 1.0 Sep 5 23:50:09.967143 kernel: nicpf, ver 1.0 Sep 5 23:50:09.967151 kernel: nicvf, ver 1.0 Sep 5 23:50:09.967247 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:50:09.967313 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:50:09 UTC (1757116209) Sep 5 23:50:09.967324 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:50:09.967332 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 5 23:50:09.967340 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:50:09.967347 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:50:09.967358 kernel: NET: 
Registered PF_INET6 protocol family Sep 5 23:50:09.967366 kernel: Segment Routing with IPv6 Sep 5 23:50:09.967374 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:50:09.967381 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:50:09.967389 kernel: Key type dns_resolver registered Sep 5 23:50:09.968466 kernel: registered taskstats version 1 Sep 5 23:50:09.968490 kernel: Loading compiled-in X.509 certificates Sep 5 23:50:09.968499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:50:09.968508 kernel: Key type .fscrypt registered Sep 5 23:50:09.968516 kernel: Key type fscrypt-provisioning registered Sep 5 23:50:09.968531 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 23:50:09.968539 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:50:09.968547 kernel: ima: No architecture policies found Sep 5 23:50:09.968555 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:50:09.968563 kernel: clk: Disabling unused clocks Sep 5 23:50:09.968571 kernel: Freeing unused kernel memory: 39424K Sep 5 23:50:09.968578 kernel: Run /init as init process Sep 5 23:50:09.968586 kernel: with arguments: Sep 5 23:50:09.968596 kernel: /init Sep 5 23:50:09.968603 kernel: with environment: Sep 5 23:50:09.968611 kernel: HOME=/ Sep 5 23:50:09.968618 kernel: TERM=linux Sep 5 23:50:09.968626 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:50:09.968636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:50:09.968664 systemd[1]: Detected virtualization kvm. Sep 5 23:50:09.968672 systemd[1]: Detected architecture arm64. Sep 5 23:50:09.968682 systemd[1]: Running in initrd. Sep 5 23:50:09.968690 systemd[1]: No hostname configured, using default hostname. Sep 5 23:50:09.968698 systemd[1]: Hostname set to . Sep 5 23:50:09.968707 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:50:09.968715 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:50:09.968723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:09.968732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:09.968741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:50:09.968752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:50:09.968760 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:50:09.968769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:50:09.968780 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:50:09.968788 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:50:09.968797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:09.968805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 5 23:50:09.968815 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:50:09.968823 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:50:09.968831 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:50:09.968840 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:50:09.968848 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:50:09.968858 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:50:09.968866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:50:09.968875 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:50:09.968885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:09.968893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:09.968901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:09.968910 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:50:09.968918 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:50:09.968926 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:50:09.968936 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:50:09.968944 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:50:09.968953 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:50:09.968963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:50:09.968972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:50:09.968980 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:50:09.968988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:50:09.968996 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:50:09.969039 systemd-journald[234]: Collecting audit messages is disabled.
Sep 5 23:50:09.969063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:50:09.969073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:09.969084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:50:09.969092 kernel: Bridge firewalling registered
Sep 5 23:50:09.969101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:50:09.969109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:50:09.969118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:50:09.969126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:50:09.969136 systemd-journald[234]: Journal started
Sep 5 23:50:09.969159 systemd-journald[234]: Runtime Journal (/run/log/journal/a9295c6f6aac473d94026367c9b1f4d8) is 8.0M, max 76.6M, 68.6M free.
Sep 5 23:50:09.928782 systemd-modules-load[236]: Inserted module 'overlay'
Sep 5 23:50:09.955497 systemd-modules-load[236]: Inserted module 'br_netfilter'
Sep 5 23:50:09.974517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:50:09.974585 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:50:09.978330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:50:09.991344 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:50:10.000790 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:10.003485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:50:10.012738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:50:10.014621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:50:10.027491 dracut-cmdline[271]: dracut-dracut-053
Sep 5 23:50:10.030745 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:50:10.030100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:50:10.065622 systemd-resolved[277]: Positive Trust Anchors:
Sep 5 23:50:10.066393 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:50:10.066450 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:50:10.076113 systemd-resolved[277]: Defaulting to hostname 'linux'.
Sep 5 23:50:10.078342 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:50:10.079101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:50:10.113471 kernel: SCSI subsystem initialized
Sep 5 23:50:10.118459 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:50:10.126478 kernel: iscsi: registered transport (tcp)
Sep 5 23:50:10.140424 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:50:10.140475 kernel: QLogic iSCSI HBA Driver
Sep 5 23:50:10.194558 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:50:10.199620 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:50:10.238578 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:50:10.238661 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:50:10.239420 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:50:10.291464 kernel: raid6: neonx8 gen() 15593 MB/s Sep 5 23:50:10.308472 kernel: raid6: neonx4 gen() 12143 MB/s Sep 5 23:50:10.325454 kernel: raid6: neonx2 gen() 13189 MB/s Sep 5 23:50:10.342462 kernel: raid6: neonx1 gen() 10460 MB/s Sep 5 23:50:10.359440 kernel: raid6: int64x8 gen() 6930 MB/s Sep 5 23:50:10.376496 kernel: raid6: int64x4 gen() 7312 MB/s Sep 5 23:50:10.393469 kernel: raid6: int64x2 gen() 6090 MB/s Sep 5 23:50:10.410487 kernel: raid6: int64x1 gen() 5039 MB/s Sep 5 23:50:10.410590 kernel: raid6: using algorithm neonx8 gen() 15593 MB/s Sep 5 23:50:10.427480 kernel: raid6: .... xor() 11997 MB/s, rmw enabled Sep 5 23:50:10.427574 kernel: raid6: using neon recovery algorithm Sep 5 23:50:10.432451 kernel: xor: measuring software checksum speed Sep 5 23:50:10.432530 kernel: 8regs : 19764 MB/sec Sep 5 23:50:10.432552 kernel: 32regs : 17780 MB/sec Sep 5 23:50:10.433512 kernel: arm64_neon : 26919 MB/sec Sep 5 23:50:10.433548 kernel: xor: using function: arm64_neon (26919 MB/sec) Sep 5 23:50:10.489454 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:50:10.504095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:50:10.510741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:10.527466 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 5 23:50:10.531015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:10.540630 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:50:10.559502 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 5 23:50:10.602472 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:50:10.608686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:10.676480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:10.687345 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:50:10.708100 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:50:10.712437 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:50:10.713543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:10.715353 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:50:10.725351 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:50:10.747171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:50:10.793462 kernel: scsi host0: Virtio SCSI HBA Sep 5 23:50:10.805496 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:10.806447 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:10.825427 kernel: ACPI: bus type USB registered Sep 5 23:50:10.830474 kernel: usbcore: registered new interface driver usbfs Sep 5 23:50:10.830544 kernel: usbcore: registered new interface driver hub Sep 5 23:50:10.842437 kernel: usbcore: registered new device driver usb Sep 5 23:50:10.844769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:50:10.845710 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:50:10.846673 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:10.848308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:10.849053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:10.855108 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 5 23:50:10.855330 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 5 23:50:10.851095 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:10.858466 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:50:10.861848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:10.867537 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 5 23:50:10.878572 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:10.878851 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 5 23:50:10.878956 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 5 23:50:10.879042 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:10.879128 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 5 23:50:10.879227 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 5 23:50:10.880435 kernel: hub 1-0:1.0: USB hub found Sep 5 23:50:10.880685 kernel: hub 1-0:1.0: 4 ports detected Sep 5 23:50:10.880778 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 5 23:50:10.880903 kernel: hub 2-0:1.0: USB hub found Sep 5 23:50:10.880998 kernel: hub 2-0:1.0: 4 ports detected Sep 5 23:50:10.895737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:10.903170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:10.905826 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 5 23:50:10.906028 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 5 23:50:10.906740 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 5 23:50:10.906879 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 5 23:50:10.907739 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 23:50:10.912737 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 23:50:10.912806 kernel: GPT:17805311 != 80003071 Sep 5 23:50:10.912817 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 23:50:10.912835 kernel: GPT:17805311 != 80003071 Sep 5 23:50:10.912844 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 23:50:10.914555 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:10.914637 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 5 23:50:10.928123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:10.965422 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (524) Sep 5 23:50:10.967425 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (526) Sep 5 23:50:10.976271 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 5 23:50:10.989001 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Sep 5 23:50:10.995444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 5 23:50:11.001946 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 5 23:50:11.004570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 5 23:50:11.014748 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:50:11.028654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:11.030633 disk-uuid[575]: Primary Header is updated. Sep 5 23:50:11.030633 disk-uuid[575]: Secondary Entries is updated. Sep 5 23:50:11.030633 disk-uuid[575]: Secondary Header is updated. Sep 5 23:50:11.122455 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 5 23:50:11.259983 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 5 23:50:11.260068 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 5 23:50:11.260510 kernel: usbcore: registered new interface driver usbhid Sep 5 23:50:11.260537 kernel: usbhid: USB HID core driver Sep 5 23:50:11.366493 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 5 23:50:11.496492 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 5 23:50:11.552440 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 5 23:50:12.048426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:12.049788 disk-uuid[576]: The operation has completed successfully. Sep 5 23:50:12.108069 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:50:12.109488 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:50:12.123735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:50:12.129090 sh[594]: Success Sep 5 23:50:12.144887 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:50:12.215945 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 23:50:12.219436 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:50:12.227629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:50:12.243693 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:50:12.243769 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:12.243785 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:50:12.244756 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:50:12.244806 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:50:12.253446 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 23:50:12.255618 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:50:12.256807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 23:50:12.278790 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
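
verity-setup above validates /dev/mapper/usr against a hash tree before /sysusr/usr is mounted. The sketch below shows the underlying idea only: hash fixed-size blocks, fold the hashes into a single root, and compare that root with a trusted value. It is a conceptual Merkle-tree illustration, not dm-verity's on-disk hash-tree format or the veritysetup tooling.

import hashlib

BLOCK = 4096

def block_hashes(data):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def merkle_root(hashes):
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # pad odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def verify(data, trusted_root_hex):
    return merkle_root(block_hashes(data)).hex() == trusted_root_hex

if __name__ == "__main__":
    image = b"\x00" * (BLOCK * 8)            # stand-in for the /usr contents
    root = merkle_root(block_hashes(image)).hex()
    print("root hash:", root)
    print("verified:", verify(image, root))
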
Sep 5 23:50:12.284252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 23:50:12.296722 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:12.296803 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:12.296822 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:12.304059 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:12.304148 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:12.318571 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:12.319066 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:50:12.328157 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:50:12.335701 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:50:12.433374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:50:12.445699 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:50:12.447512 ignition[684]: Ignition 2.19.0 Sep 5 23:50:12.447521 ignition[684]: Stage: fetch-offline Sep 5 23:50:12.447575 ignition[684]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:12.447586 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:12.447776 ignition[684]: parsed url from cmdline: "" Sep 5 23:50:12.447779 ignition[684]: no config URL provided Sep 5 23:50:12.447783 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:50:12.447790 ignition[684]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:50:12.447796 ignition[684]: failed to fetch config: resource requires networking Sep 5 23:50:12.448001 ignition[684]: Ignition finished successfully Sep 5 23:50:12.454755 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:50:12.474768 systemd-networkd[781]: lo: Link UP Sep 5 23:50:12.474780 systemd-networkd[781]: lo: Gained carrier Sep 5 23:50:12.476899 systemd-networkd[781]: Enumeration completed Sep 5 23:50:12.477293 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:50:12.478067 systemd[1]: Reached target network.target - Network. Sep 5 23:50:12.480130 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:12.480136 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:12.483748 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:12.483756 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:12.484609 systemd-networkd[781]: eth0: Link UP Sep 5 23:50:12.484613 systemd-networkd[781]: eth0: Gained carrier Sep 5 23:50:12.484623 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:12.486683 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
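
The fetch-offline stage above walks a fixed fallback order: a config URL from the kernel command line, then the local /usr/lib/ignition/user.ign, and if neither yields a config it reports that fetching requires networking so the networked fetch stage can take over. A hedged sketch of that decision logic follows; the paths mirror the log, but the function is illustrative, not Ignition's real implementation.

import os

def fetch_offline(cmdline_url="", user_ign="/usr/lib/ignition/user.ign"):
    if cmdline_url:
        return ("url", cmdline_url)
    if os.path.exists(user_ign):
        with open(user_ign, "rb") as f:
            return ("file", f.read())
    # No local source: the networked fetch stage must query the platform.
    raise RuntimeError("failed to fetch config: resource requires networking")

if __name__ == "__main__":
    try:
        fetch_offline()
    except RuntimeError as err:
        print(err)
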
Sep 5 23:50:12.487844 systemd-networkd[781]: eth1: Link UP Sep 5 23:50:12.487849 systemd-networkd[781]: eth1: Gained carrier Sep 5 23:50:12.487862 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:12.507425 ignition[784]: Ignition 2.19.0 Sep 5 23:50:12.508225 ignition[784]: Stage: fetch Sep 5 23:50:12.508912 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:12.509513 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:12.510546 ignition[784]: parsed url from cmdline: "" Sep 5 23:50:12.510623 ignition[784]: no config URL provided Sep 5 23:50:12.511063 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:50:12.511821 ignition[784]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:50:12.511882 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 5 23:50:12.512795 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 5 23:50:12.523529 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:12.545540 systemd-networkd[781]: eth0: DHCPv4 address 91.99.216.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:12.713475 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 5 23:50:12.719930 ignition[784]: GET result: OK Sep 5 23:50:12.720104 ignition[784]: parsing config with SHA512: 1ee826221afa18dc2d45eced2f5e102e251bc5af1444b0cfed454282b87b1f93944d06383a7cf2045c7cb25143215fa8e247f13414cf698c5b0d4b19d37e10bb Sep 5 23:50:12.727332 unknown[784]: fetched base config from "system" Sep 5 23:50:12.727352 unknown[784]: fetched base config from "system" Sep 5 23:50:12.727985 ignition[784]: fetch: fetch complete Sep 5 23:50:12.727358 unknown[784]: fetched user config from "hetzner" Sep 5 23:50:12.727991 ignition[784]: fetch: fetch passed Sep 5 23:50:12.730235 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 5 23:50:12.728048 ignition[784]: Ignition finished successfully Sep 5 23:50:12.734716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 23:50:12.753976 ignition[791]: Ignition 2.19.0 Sep 5 23:50:12.753997 ignition[791]: Stage: kargs Sep 5 23:50:12.754475 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:12.754502 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:12.756395 ignition[791]: kargs: kargs passed Sep 5 23:50:12.756485 ignition[791]: Ignition finished successfully Sep 5 23:50:12.759041 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 23:50:12.764672 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 23:50:12.780130 ignition[797]: Ignition 2.19.0 Sep 5 23:50:12.780141 ignition[797]: Stage: disks Sep 5 23:50:12.780363 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:12.780375 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:12.782987 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 23:50:12.781533 ignition[797]: disks: disks passed Sep 5 23:50:12.781595 ignition[797]: Ignition finished successfully Sep 5 23:50:12.785843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 23:50:12.786531 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
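
The fetch stage's first GET against http://169.254.169.254/hetzner/v1/userdata fails because DHCP has not completed, the second attempt succeeds, and the payload is identified by its SHA-512 before parsing. The sketch below mirrors that retry-and-hash flow with the standard library; the retry count and delay are arbitrary choices, and the endpoint only answers from inside the provider's network.

import hashlib, time, urllib.request, urllib.error

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(url=USERDATA_URL, attempts=5, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read()
            print(f"GET result: OK (attempt #{attempt})")
            print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
            return body
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay)
    raise RuntimeError("giving up: metadata service unreachable")

if __name__ == "__main__":
    fetch_userdata()
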
Sep 5 23:50:12.787140 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:50:12.788139 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:50:12.789512 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:12.795798 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 23:50:12.818999 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 5 23:50:12.829412 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 23:50:12.848797 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 23:50:12.904456 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none. Sep 5 23:50:12.904149 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 23:50:12.905963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 23:50:12.913577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:50:12.916860 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 23:50:12.923635 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 5 23:50:12.924275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 23:50:12.924311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:50:12.931361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 23:50:12.940712 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 23:50:12.946484 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Sep 5 23:50:12.951050 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:12.951118 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:12.951870 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:12.979559 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:12.979646 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:12.986549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:50:13.003797 coreos-metadata[816]: Sep 05 23:50:13.003 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 5 23:50:13.006518 coreos-metadata[816]: Sep 05 23:50:13.005 INFO Fetch successful Sep 5 23:50:13.009835 coreos-metadata[816]: Sep 05 23:50:13.009 INFO wrote hostname ci-4081-3-5-n-f09ad01745 to /sysroot/etc/hostname Sep 5 23:50:13.010702 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:50:13.013965 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 23:50:13.020663 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Sep 5 23:50:13.025347 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 23:50:13.030288 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 23:50:13.142924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 23:50:13.161617 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 23:50:13.168629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
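
flatcar-metadata-hostname above fetches the hostname from the metadata service and writes it into the new root at /sysroot/etc/hostname. A minimal stand-in for that agent is sketched below; the URL and destination path come from the log, and the dry-run call writes to a local file instead of /sysroot.

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(dest="/sysroot/etc/hostname", url=HOSTNAME_URL):
    with urllib.request.urlopen(url, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {dest}")

if __name__ == "__main__":
    write_hostname(dest="./hostname")   # local file for a dry run off-cloud
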
Sep 5 23:50:13.182514 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:13.205527 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 23:50:13.211220 ignition[931]: INFO : Ignition 2.19.0 Sep 5 23:50:13.211220 ignition[931]: INFO : Stage: mount Sep 5 23:50:13.212362 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:13.212362 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:13.213716 ignition[931]: INFO : mount: mount passed Sep 5 23:50:13.213716 ignition[931]: INFO : Ignition finished successfully Sep 5 23:50:13.215243 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 23:50:13.219615 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 23:50:13.244904 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 23:50:13.252691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:50:13.267906 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Sep 5 23:50:13.269683 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:13.269739 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:13.269768 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:13.274451 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:13.274517 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:13.278737 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:50:13.305683 ignition[960]: INFO : Ignition 2.19.0 Sep 5 23:50:13.306502 ignition[960]: INFO : Stage: files Sep 5 23:50:13.306926 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:13.306926 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:13.308858 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Sep 5 23:50:13.309928 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 23:50:13.309928 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 23:50:13.314067 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 23:50:13.315171 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 23:50:13.316025 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 23:50:13.315813 unknown[960]: wrote ssh authorized keys file for user: core Sep 5 23:50:13.318215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 5 23:50:13.318215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 5 23:50:13.318215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:50:13.318215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 5 23:50:13.456345 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 23:50:13.901654 systemd-networkd[781]: eth1: Gained IPv6LL Sep 5 23:50:14.224962 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:50:14.224962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 23:50:14.224962 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 5 23:50:14.477712 systemd-networkd[781]: eth0: Gained IPv6LL Sep 5 23:50:14.534626 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:15.019245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 5 23:50:15.238440 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 5 23:50:15.433537 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:15.433537 ignition[960]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 5 23:50:15.437578 
ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:50:15.437578 ignition[960]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:50:15.437578 ignition[960]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:50:15.437578 ignition[960]: INFO : files: files passed Sep 5 23:50:15.437578 ignition[960]: INFO : Ignition finished successfully Sep 5 23:50:15.439791 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:50:15.450652 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:50:15.455477 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:50:15.463166 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:50:15.463307 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:50:15.473501 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:15.473501 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:15.477258 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:15.480531 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:50:15.482691 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
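
The files stage above finishes by writing unit files and drop-ins under the new root: a 10-use-cgroupfs.conf drop-in for containerd.service, a prepare-helm.service unit, and an enablement preset for it. The sketch below reproduces that file layout only; rooting everything under a demo directory and realising the "enabled" preset as a multi-user.target.wants symlink are assumptions made for illustration, not details stated in the log.

import os

def write_dropin(root, unit, name, body):
    d = os.path.join(root, "etc/systemd/system", unit + ".d")
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, name), "w") as f:
        f.write(body)

def write_unit(root, unit, body, enable=False):
    base = os.path.join(root, "etc/systemd/system")
    os.makedirs(base, exist_ok=True)
    path = os.path.join(base, unit)
    with open(path, "w") as f:
        f.write(body)
    if enable:
        # assumption: model "preset enabled" as a wants symlink
        wants = os.path.join(base, "multi-user.target.wants")
        os.makedirs(wants, exist_ok=True)
        link = os.path.join(wants, unit)
        if not os.path.islink(link):
            os.symlink(path, link)

if __name__ == "__main__":
    root = "./sysroot-demo"              # stand-in for /sysroot
    write_dropin(root, "containerd.service", "10-use-cgroupfs.conf",
                 "[Service]\n# drop-in contents written by the files stage\n")
    write_unit(root, "prepare-helm.service",
               "[Unit]\nDescription=Unpack helm to /opt/bin\n", enable=True)
    print("wrote units under", os.path.join(root, "etc/systemd/system"))
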
Sep 5 23:50:15.489852 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:50:15.534592 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:50:15.534815 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:50:15.537365 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:50:15.541204 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:50:15.541976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:50:15.548626 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:50:15.565033 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:50:15.577712 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:50:15.591480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:15.592893 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:15.593693 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:50:15.594248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:50:15.594389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:50:15.596593 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:50:15.598190 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:50:15.599226 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:50:15.600151 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:50:15.601205 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:50:15.602239 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:50:15.603251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:50:15.604417 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:50:15.605396 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:50:15.606601 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:50:15.607387 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:50:15.607551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:50:15.608876 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:15.609974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:15.611030 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:50:15.615483 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:15.616961 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:50:15.617272 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:50:15.619035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:50:15.619533 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:50:15.621054 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:50:15.621214 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:50:15.622146 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Sep 5 23:50:15.622248 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:50:15.630898 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:50:15.632363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:50:15.632797 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:15.636771 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:50:15.638558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:50:15.638798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:15.640729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:50:15.640902 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:50:15.650699 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:50:15.650811 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:50:15.657431 ignition[1012]: INFO : Ignition 2.19.0 Sep 5 23:50:15.657431 ignition[1012]: INFO : Stage: umount Sep 5 23:50:15.657431 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:15.657431 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:15.663444 ignition[1012]: INFO : umount: umount passed Sep 5 23:50:15.663444 ignition[1012]: INFO : Ignition finished successfully Sep 5 23:50:15.661172 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:50:15.663262 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:50:15.663931 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:50:15.665087 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:50:15.665265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:50:15.666950 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:50:15.667008 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:50:15.667775 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:50:15.667820 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:50:15.668815 systemd[1]: Stopped target network.target - Network. Sep 5 23:50:15.669742 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:50:15.669819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:50:15.670791 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:50:15.671635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:50:15.675563 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:15.677643 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:50:15.679349 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:50:15.681566 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:50:15.681650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:50:15.683183 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:50:15.683263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:50:15.684808 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:50:15.684868 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Sep 5 23:50:15.685680 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:50:15.685720 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:50:15.686768 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:50:15.687524 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:50:15.688248 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 5 23:50:15.688407 systemd-networkd[781]: eth1: DHCPv6 lease lost Sep 5 23:50:15.690672 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:50:15.690975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:50:15.692277 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:50:15.692372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:50:15.694759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:50:15.694859 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:15.695698 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:50:15.695754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:50:15.699658 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:50:15.700331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:50:15.700421 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:50:15.703764 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:15.707056 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:50:15.707671 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:50:15.717101 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:50:15.717264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:15.719361 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:50:15.719526 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:15.721750 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:50:15.721814 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:15.728836 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:50:15.729002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:15.744073 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:50:15.744220 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:15.747097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:50:15.747182 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:15.748644 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:50:15.748698 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:50:15.749981 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:50:15.750029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:50:15.751487 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:50:15.751557 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:50:15.757704 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:50:15.758755 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:50:15.758856 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:15.760366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:15.760466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:15.761753 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:50:15.763451 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:50:15.774219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:50:15.774397 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:50:15.776616 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:50:15.782877 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:50:15.799491 systemd[1]: Switching root. Sep 5 23:50:15.831685 systemd-journald[234]: Journal stopped Sep 5 23:50:16.853828 systemd-journald[234]: Received SIGTERM from PID 1 (systemd). Sep 5 23:50:16.853898 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:50:16.853916 kernel: SELinux: policy capability open_perms=1 Sep 5 23:50:16.853926 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:50:16.853945 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:50:16.853959 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:50:16.853972 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:50:16.853982 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:50:16.853992 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:50:16.854002 kernel: audit: type=1403 audit(1757116216.082:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:50:16.854013 systemd[1]: Successfully loaded SELinux policy in 37.640ms. Sep 5 23:50:16.854038 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.196ms. Sep 5 23:50:16.854052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:50:16.854064 systemd[1]: Detected virtualization kvm. Sep 5 23:50:16.854075 systemd[1]: Detected architecture arm64. Sep 5 23:50:16.854085 systemd[1]: Detected first boot. Sep 5 23:50:16.854096 systemd[1]: Hostname set to . Sep 5 23:50:16.854106 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:50:16.854153 zram_generator::config[1072]: No configuration found. Sep 5 23:50:16.854172 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:50:16.854188 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:50:16.854199 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 5 23:50:16.854210 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:50:16.854226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:50:16.854237 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
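
After the switch to the real root, systemd prints its compile-time feature string. The helper below simply splits that string (copied from the line above) into enabled and disabled sets, which also explains the later journald warning about BPF/cgroup firewalling, since BPF_FRAMEWORK appears in the disabled set. This is purely a log-reading aid, not systemd behaviour.

FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT")

def parse_features(s):
    enabled = {tok[1:] for tok in s.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in s.split() if tok.startswith("-")}
    return enabled, disabled

if __name__ == "__main__":
    enabled, disabled = parse_features(FEATURES)
    print("enabled: ", ", ".join(sorted(enabled)))
    print("disabled:", ", ".join(sorted(disabled)))
    print("BPF firewalling available:", "BPF_FRAMEWORK" in enabled)
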
Sep 5 23:50:16.854248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:50:16.854259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:50:16.854272 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:50:16.854283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:50:16.854293 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:50:16.854304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:16.854316 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:16.854330 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:50:16.854340 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:50:16.854351 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:50:16.854362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:50:16.854375 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:50:16.854385 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:16.854396 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:50:16.854426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:16.854438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:50:16.854449 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:50:16.854459 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:50:16.854472 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:50:16.854483 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:50:16.854493 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:50:16.854504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:50:16.854515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:16.854526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:16.854537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:16.854548 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:50:16.854558 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:50:16.854570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:50:16.854581 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:50:16.854593 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:50:16.854604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:50:16.854623 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:50:16.854638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:50:16.854652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
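
Device and slice unit names in these lines, such as dev-disk-by\x2dlabel-OEM.device, come from systemd's path escaping: slashes become "-" and other special characters, including a literal "-", become \xNN. The sketch below reproduces that transformation well enough to regenerate the names seen in this log; systemd-escape(1) remains the authoritative implementation.

def escape_path_unit(path, suffix=".device"):
    path = path.strip("/")
    out = []
    for i, ch in enumerate(path):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

if __name__ == "__main__":
    print(escape_path_unit("/dev/disk/by-label/OEM"))
    # -> dev-disk-by\x2dlabel-OEM.device
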
Sep 5 23:50:16.854662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:50:16.854673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:50:16.854683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:16.854698 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:16.854708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:16.854719 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:50:16.854730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:16.854742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:50:16.854753 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 5 23:50:16.854764 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 5 23:50:16.854775 kernel: ACPI: bus type drm_connector registered Sep 5 23:50:16.854785 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:50:16.854795 kernel: fuse: init (API version 7.39) Sep 5 23:50:16.854805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:50:16.854816 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:50:16.854828 kernel: loop: module loaded Sep 5 23:50:16.854840 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:50:16.854850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:16.854890 systemd-journald[1156]: Collecting audit messages is disabled. Sep 5 23:50:16.854914 systemd-journald[1156]: Journal started Sep 5 23:50:16.854936 systemd-journald[1156]: Runtime Journal (/run/log/journal/a9295c6f6aac473d94026367c9b1f4d8) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:50:16.859897 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:50:16.862194 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:50:16.864247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:50:16.865110 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:50:16.865799 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:50:16.868594 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:50:16.869345 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:50:16.871981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:16.873074 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:50:16.873310 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:50:16.874803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:16.874963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:16.878003 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:16.878203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:16.879471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 5 23:50:16.879638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:16.880772 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:50:16.880935 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:50:16.881823 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:16.883767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:16.884800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:50:16.888075 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:16.890670 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:50:16.891798 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:50:16.905438 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:50:16.914664 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:50:16.918616 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:50:16.920769 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:50:16.933724 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:50:16.941691 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:50:16.942835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:16.946032 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:50:16.949620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:16.956000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:50:16.964306 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:50:16.969045 systemd-journald[1156]: Time spent on flushing to /var/log/journal/a9295c6f6aac473d94026367c9b1f4d8 is 25.187ms for 1112 entries. Sep 5 23:50:16.969045 systemd-journald[1156]: System Journal (/var/log/journal/a9295c6f6aac473d94026367c9b1f4d8) is 8.0M, max 584.8M, 576.8M free. Sep 5 23:50:17.007633 systemd-journald[1156]: Received client request to flush runtime journal. Sep 5 23:50:16.971805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:50:16.973649 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:50:16.993916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:17.004661 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:50:17.017081 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:50:17.021016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:50:17.025058 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:50:17.029316 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Sep 5 23:50:17.029336 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. 
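
Lines in this journal pair a "Starting <unit>..." message with a later "Finished <unit>." message, so unit start-up times can be recovered from the timestamps alone. The helper below does that for systemd[1] lines; the timestamp format is taken from this log (no year field), so only relative durations are computed.

import re
from datetime import datetime

LINE = re.compile(r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: "
                  r"(Starting|Finished) (.+?)(\.\.\.|\.)$")

def parse_ts(ts):
    # the journal omits the year, so pin an arbitrary one for arithmetic
    return datetime.strptime("2000 " + ts, "%Y %b %d %H:%M:%S.%f")

def unit_durations(lines):
    started, durations = {}, {}
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue
        ts, verb, what = parse_ts(m.group(1)), m.group(2), m.group(3)
        if verb == "Starting":
            started[what] = ts
        elif what in started:
            durations[what] = (ts - started.pop(what)).total_seconds()
    return durations

if __name__ == "__main__":
    sample = [
        "Sep 5 23:50:16.933724 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...",
        "Sep 5 23:50:17.569281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.",
    ]
    for unit, secs in unit_durations(sample).items():
        print(f"{unit}: {secs * 1000:.1f} ms")
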
Sep 5 23:50:17.033692 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:50:17.034750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:50:17.047671 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:50:17.053017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:17.082965 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:50:17.095634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:50:17.109373 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 5 23:50:17.109748 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 5 23:50:17.115994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:17.569281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:50:17.576798 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:17.603650 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Sep 5 23:50:17.629546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:17.645899 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:50:17.664238 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:50:17.745592 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 5 23:50:17.747612 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:50:17.840209 systemd-networkd[1244]: lo: Link UP Sep 5 23:50:17.840607 systemd-networkd[1244]: lo: Gained carrier Sep 5 23:50:17.844672 systemd-networkd[1244]: Enumeration completed Sep 5 23:50:17.846156 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:50:17.849226 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:17.849467 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:17.850926 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:17.851055 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:17.851703 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:17.852334 systemd-networkd[1244]: eth0: Link UP Sep 5 23:50:17.852433 systemd-networkd[1244]: eth0: Gained carrier Sep 5 23:50:17.852449 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:17.858644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:50:17.862979 systemd-networkd[1244]: eth1: Link UP Sep 5 23:50:17.862988 systemd-networkd[1244]: eth1: Gained carrier Sep 5 23:50:17.863010 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 23:50:17.864444 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:50:17.883547 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:17.891805 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:17.903461 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1238) Sep 5 23:50:17.909753 systemd-networkd[1244]: eth0: DHCPv4 address 91.99.216.181/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:17.960592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:50:18.006992 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 5 23:50:18.007072 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 5 23:50:18.007088 kernel: [drm] features: -context_init Sep 5 23:50:18.009574 kernel: [drm] number of scanouts: 1 Sep 5 23:50:18.012429 kernel: [drm] number of cap sets: 0 Sep 5 23:50:18.021667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:18.028428 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 5 23:50:18.033816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:18.039648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:18.046669 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 23:50:18.054608 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 5 23:50:18.058689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:18.059265 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:50:18.059312 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:50:18.059775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:18.059951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:18.064325 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:18.064553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:18.068127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:18.069937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:18.074072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:18.074262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:18.078892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:18.153518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:18.192812 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:50:18.199648 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:50:18.220422 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
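
The DHCPv4 leases above hand each interface a /32 host address with a gateway outside that prefix (91.99.216.181/32 with gateway 172.31.1.1), so the gateway has to be reached through an on-link route before a default route can point at it. The quick check below makes that off-prefix relationship explicit with the standard ipaddress module.

import ipaddress

addr = ipaddress.ip_interface("91.99.216.181/32")
gateway = ipaddress.ip_address("172.31.1.1")

print("host network:", addr.network)
print("gateway inside prefix:", gateway in addr.network)   # False
# Because the gateway is off-prefix, it must be installed as an
# on-link (scope link) route before the default route can use it.
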
Sep 5 23:50:18.251355 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:50:18.252521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:18.259805 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:50:18.266624 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:18.294320 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:50:18.297970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:50:18.300337 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:50:18.300536 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:50:18.301189 systemd[1]: Reached target machines.target - Containers. Sep 5 23:50:18.303145 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:50:18.309714 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:50:18.314661 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:50:18.316703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:18.323753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:50:18.326965 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:50:18.340203 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:50:18.350074 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:50:18.351963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:50:18.358223 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:50:18.363595 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:50:18.374501 kernel: loop0: detected capacity change from 0 to 114328 Sep 5 23:50:18.400632 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:50:18.417493 kernel: loop1: detected capacity change from 0 to 203944 Sep 5 23:50:18.459445 kernel: loop2: detected capacity change from 0 to 8 Sep 5 23:50:18.480476 kernel: loop3: detected capacity change from 0 to 114432 Sep 5 23:50:18.513458 kernel: loop4: detected capacity change from 0 to 114328 Sep 5 23:50:18.529310 kernel: loop5: detected capacity change from 0 to 203944 Sep 5 23:50:18.548707 kernel: loop6: detected capacity change from 0 to 8 Sep 5 23:50:18.550621 kernel: loop7: detected capacity change from 0 to 114432 Sep 5 23:50:18.562462 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 5 23:50:18.563063 (sd-merge)[1329]: Merged extensions into '/usr'. Sep 5 23:50:18.568538 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:50:18.568734 systemd[1]: Reloading... Sep 5 23:50:18.661667 zram_generator::config[1358]: No configuration found. 
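
The (sd-merge) lines show systemd-sysext combining the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images with the base /usr into one merged tree. The sketch below only composes an overlayfs lowerdir option for that combination; the staging directory, ordering and mount flags are assumptions made for illustration and are not stated in the log.

EXTENSIONS = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]

def sysext_overlay_options(extensions, base="/usr",
                           run_dir="/run/systemd/sysext"):
    # overlayfs resolves lookups left to right, so extensions are listed
    # before the base hierarchy they override
    lowers = [f"{run_dir}/{name}/usr" for name in extensions] + [base]
    return "lowerdir=" + ":".join(lowers)

if __name__ == "__main__":
    opts = sysext_overlay_options(EXTENSIONS)
    print("mount -t overlay overlay -o ro," + opts + " /usr   # illustrative only")
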
Sep 5 23:50:18.801686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:18.864174 systemd[1]: Reloading finished in 294 ms. Sep 5 23:50:18.864382 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:50:18.878244 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:50:18.880999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:50:18.890755 systemd[1]: Starting ensure-sysext.service... Sep 5 23:50:18.895736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:50:18.904041 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:50:18.904060 systemd[1]: Reloading... Sep 5 23:50:18.935878 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:50:18.936190 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:50:18.936889 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:50:18.937157 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 5 23:50:18.937205 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 5 23:50:18.941960 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:18.942149 systemd-tmpfiles[1403]: Skipping /boot Sep 5 23:50:18.950748 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:18.950898 systemd-tmpfiles[1403]: Skipping /boot Sep 5 23:50:18.957497 systemd-networkd[1244]: eth0: Gained IPv6LL Sep 5 23:50:19.013428 zram_generator::config[1432]: No configuration found. Sep 5 23:50:19.131753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:19.196655 systemd[1]: Reloading finished in 292 ms. Sep 5 23:50:19.213307 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:50:19.214655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:19.253719 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:19.261752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:50:19.267735 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:50:19.274781 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:50:19.281174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:50:19.292616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:19.299696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:19.306823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
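The docker.socket message above is only a compatibility notice: systemd rewrites the legacy /var/run/docker.sock listen path to /run/docker.sock when loading the unit. If the warning were to be silenced permanently, a drop-in overriding ListenStream would do it; a sketch, assuming the shipped unit is otherwise left untouched:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload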
Sep 5 23:50:19.321353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:19.322037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:19.323023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:19.323228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:19.340192 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:50:19.343110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:19.343791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:19.344066 augenrules[1508]: No rules Sep 5 23:50:19.346626 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:19.347157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:19.351364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:19.363889 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:50:19.374927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:19.383942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:19.392694 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:19.396244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:19.405428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:19.408635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:19.430027 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:50:19.438296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:50:19.441718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:19.441890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:19.444720 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:19.444911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:19.447245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:19.447456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:19.449146 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:19.449318 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:19.465550 systemd-resolved[1488]: Positive Trust Anchors: Sep 5 23:50:19.465573 systemd-resolved[1488]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:19.465606 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:19.466218 systemd[1]: Finished ensure-sysext.service. Sep 5 23:50:19.470330 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:50:19.474743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:19.474839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:19.477356 systemd-resolved[1488]: Using system hostname 'ci-4081-3-5-n-f09ad01745'. Sep 5 23:50:19.490895 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:50:19.492528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:50:19.493034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:19.494889 systemd[1]: Reached target network.target - Network. Sep 5 23:50:19.495543 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:50:19.496384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:19.544875 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:50:19.547810 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:50:19.548591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:50:19.549342 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:50:19.550186 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:50:19.551027 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:50:19.551059 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:19.551879 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:50:19.552731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:50:19.553508 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:50:19.554266 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:19.555956 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:50:19.558451 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:50:19.561022 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:50:19.565576 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
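With systemd-resolved and systemd-timesyncd both active and the network-online target reached, their runtime state can be queried directly; a sketch:

    resolvectl status              # per-link DNS servers and DNSSEC trust anchor state
    timedatectl timesync-status    # NTP server, poll interval and current offset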
Sep 5 23:50:19.567007 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:19.568280 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:19.569206 systemd[1]: System is tainted: cgroupsv1 Sep 5 23:50:19.569265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:19.569291 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:19.571803 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:50:19.585796 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:50:19.590118 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:50:19.593664 systemd-timesyncd[1540]: Contacted time server 85.215.166.214:123 (0.flatcar.pool.ntp.org). Sep 5 23:50:19.596583 systemd-timesyncd[1540]: Initial clock synchronization to Fri 2025-09-05 23:50:19.894969 UTC. Sep 5 23:50:19.605565 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:50:19.609708 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:50:19.610386 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:50:19.618645 jq[1550]: false Sep 5 23:50:19.618636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:19.625125 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:50:19.631675 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:50:19.649591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:50:19.658365 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 5 23:50:19.671565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:50:19.680561 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
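The "System is tainted: cgroupsv1" note means this boot is using the legacy cgroup hierarchy. Which hierarchy is active can be confirmed from the filesystem type mounted at /sys/fs/cgroup; a sketch:

    stat -fc %T /sys/fs/cgroup    # 'tmpfs' on cgroup v1, 'cgroup2fs' on cgroup v2
    # moving to the unified hierarchy is a kernel command line change, e.g.
    #   systemd.unified_cgroup_hierarchy=1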
Sep 5 23:50:19.690747 dbus-daemon[1546]: [system] SELinux support is enabled Sep 5 23:50:19.694346 coreos-metadata[1545]: Sep 05 23:50:19.694 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found loop4 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found loop5 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found loop6 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found loop7 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda1 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda2 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda3 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found usr Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda4 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda6 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda7 Sep 5 23:50:19.697335 extend-filesystems[1551]: Found sda9 Sep 5 23:50:19.697335 extend-filesystems[1551]: Checking size of /dev/sda9 Sep 5 23:50:19.710047 coreos-metadata[1545]: Sep 05 23:50:19.702 INFO Fetch successful Sep 5 23:50:19.710047 coreos-metadata[1545]: Sep 05 23:50:19.702 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 5 23:50:19.710047 coreos-metadata[1545]: Sep 05 23:50:19.707 INFO Fetch successful Sep 5 23:50:19.702953 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:50:19.706388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:50:19.711980 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:50:19.732715 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:50:19.744750 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:50:19.751956 extend-filesystems[1551]: Resized partition /dev/sda9 Sep 5 23:50:19.756612 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:50:19.756918 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:50:19.771237 extend-filesystems[1588]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:50:19.780428 jq[1574]: true Sep 5 23:50:19.781966 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:50:19.782351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:50:19.799211 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 5 23:50:19.799018 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:50:19.799474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:50:19.806297 update_engine[1570]: I20250905 23:50:19.806005 1570 main.cc:92] Flatcar Update Engine starting Sep 5 23:50:19.814000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:50:19.814920 update_engine[1570]: I20250905 23:50:19.814862 1570 update_check_scheduler.cc:74] Next update check in 9m4s Sep 5 23:50:19.841974 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:50:19.853448 jq[1596]: true Sep 5 23:50:19.854010 systemd-networkd[1244]: eth1: Gained IPv6LL Sep 5 23:50:19.881351 systemd-logind[1566]: New seat seat0. 
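coreos-metadata is reading instance data from the Hetzner link-local endpoint shown in the log; the same endpoints can be fetched by hand when debugging provisioning. A sketch using the URLs as logged:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks
    curl -s http://169.254.169.254/hetzner/v1/metadata/public-keys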
Sep 5 23:50:19.889587 tar[1591]: linux-arm64/helm Sep 5 23:50:19.887977 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:50:19.889660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:50:19.889693 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:50:19.896326 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:50:19.896361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:50:19.905330 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:50:19.912414 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:50:19.912443 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 5 23:50:19.933750 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:50:19.944941 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:50:20.027209 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:50:20.028814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:50:20.037820 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1240) Sep 5 23:50:20.076359 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 5 23:50:20.116131 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:20.116478 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 5 23:50:20.116478 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 5 23:50:20.116478 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 5 23:50:20.138792 extend-filesystems[1551]: Resized filesystem in /dev/sda9 Sep 5 23:50:20.138792 extend-filesystems[1551]: Found sr0 Sep 5 23:50:20.124946 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:50:20.125278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:50:20.130580 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:50:20.149868 systemd[1]: Starting sshkeys.service... Sep 5 23:50:20.173592 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:50:20.201021 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
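extend-filesystems grew the mounted root filesystem on /dev/sda9 on-line from 1617920 to 9393147 4k blocks. The equivalent manual steps, once the underlying partition has already been enlarged, are a plain resize2fs followed by a size check; a sketch:

    resize2fs /dev/sda9    # on-line grow of the mounted ext4 filesystem
    df -h /                # confirm the new capacity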
Sep 5 23:50:20.252695 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:50:20.271495 coreos-metadata[1650]: Sep 05 23:50:20.271 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 5 23:50:20.273535 coreos-metadata[1650]: Sep 05 23:50:20.272 INFO Fetch successful Sep 5 23:50:20.277949 unknown[1650]: wrote ssh authorized keys file for user: core Sep 5 23:50:20.323310 containerd[1598]: time="2025-09-05T23:50:20.323177055Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:50:20.326963 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:20.331104 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:50:20.344100 systemd[1]: Finished sshkeys.service. Sep 5 23:50:20.382523 containerd[1598]: time="2025-09-05T23:50:20.382234736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.384155 containerd[1598]: time="2025-09-05T23:50:20.384096746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384292328Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384319390Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384522070Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384545853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384612182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384625755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384864047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384880484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384895634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384906799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.384985663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385464 containerd[1598]: time="2025-09-05T23:50:20.385232921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385747 containerd[1598]: time="2025-09-05T23:50:20.385364416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:20.385747 containerd[1598]: time="2025-09-05T23:50:20.385379939Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:50:20.385869 containerd[1598]: time="2025-09-05T23:50:20.385845732Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:50:20.385989 containerd[1598]: time="2025-09-05T23:50:20.385973242Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:50:20.400498 containerd[1598]: time="2025-09-05T23:50:20.400317527Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:50:20.401271 containerd[1598]: time="2025-09-05T23:50:20.401235084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:50:20.401439 containerd[1598]: time="2025-09-05T23:50:20.401420289Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:50:20.401532 containerd[1598]: time="2025-09-05T23:50:20.401518080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:50:20.401613 containerd[1598]: time="2025-09-05T23:50:20.401599268Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:50:20.401980 containerd[1598]: time="2025-09-05T23:50:20.401953240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402626984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402853696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402878061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402893626Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402911184Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402924756Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402945552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402962071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.402979131Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.403015782Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.403034252Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.403048863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.403074265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403465 containerd[1598]: time="2025-09-05T23:50:20.403096679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403115565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403131047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403145035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403160766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403174671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403189489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403204348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403220370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403236350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403251335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403265613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403292053Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403316833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403331236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.403785 containerd[1598]: time="2025-09-05T23:50:20.403344020Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404492730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404679097Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404696904Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404711016Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404726664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404741358Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404776390Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:50:20.404845 containerd[1598]: time="2025-09-05T23:50:20.404787763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:50:20.405751 containerd[1598]: time="2025-09-05T23:50:20.405445153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:50:20.405751 containerd[1598]: time="2025-09-05T23:50:20.405525884Z" level=info msg="Connect containerd service" Sep 5 23:50:20.405751 containerd[1598]: time="2025-09-05T23:50:20.405564403Z" level=info msg="using legacy CRI server" Sep 5 23:50:20.405751 containerd[1598]: time="2025-09-05T23:50:20.405584243Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:50:20.405751 containerd[1598]: time="2025-09-05T23:50:20.405719307Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407191482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:50:20.407957 
containerd[1598]: time="2025-09-05T23:50:20.407400471Z" level=info msg="Start subscribing containerd event" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407471531Z" level=info msg="Start recovering state" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407546908Z" level=info msg="Start event monitor" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407558198Z" level=info msg="Start snapshots syncer" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407568160Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:50:20.407957 containerd[1598]: time="2025-09-05T23:50:20.407575548Z" level=info msg="Start streaming server" Sep 5 23:50:20.408778 containerd[1598]: time="2025-09-05T23:50:20.408751113Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:50:20.408935 containerd[1598]: time="2025-09-05T23:50:20.408862643Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:50:20.413609 containerd[1598]: time="2025-09-05T23:50:20.411494029Z" level=info msg="containerd successfully booted in 0.096848s" Sep 5 23:50:20.412678 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:50:20.730070 tar[1591]: linux-arm64/LICENSE Sep 5 23:50:20.730279 tar[1591]: linux-arm64/README.md Sep 5 23:50:20.763127 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:50:20.949720 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:50:20.988939 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:50:21.000835 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:50:21.011937 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:50:21.012243 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:50:21.024590 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:50:21.037996 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:50:21.043935 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:50:21.048132 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:50:21.051409 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:50:21.199668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:21.201673 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:50:21.202415 systemd[1]: Startup finished in 7.230s (kernel) + 5.157s (userspace) = 12.388s. Sep 5 23:50:21.227410 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:21.824262 kubelet[1704]: E0905 23:50:21.824177 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:21.827032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:21.827248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:32.078359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:50:32.088752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
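Once containerd reports "successfully booted" and serves on /run/containerd/containerd.sock, the socket can be exercised directly. A sketch, assuming the ctr and crictl clients are available from the merged extensions:

    ctr --address /run/containerd/containerd.sock version
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info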
Sep 5 23:50:32.241706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:32.246992 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:32.309891 kubelet[1728]: E0905 23:50:32.309828 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:32.314723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:32.314964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:42.566358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:50:42.575731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:42.723744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:42.738115 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:42.797879 kubelet[1748]: E0905 23:50:42.797813 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:42.803794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:42.804148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:48.730366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:50:48.737827 systemd[1]: Started sshd@0-91.99.216.181:22-139.178.68.195:49074.service - OpenSSH per-connection server daemon (139.178.68.195:49074). Sep 5 23:50:49.736786 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 49074 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:49.739959 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:49.755839 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:50:49.762729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:50:49.766665 systemd-logind[1566]: New session 1 of user core. Sep 5 23:50:49.791281 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:50:49.805060 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:50:49.809100 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:50:49.929335 systemd[1762]: Queued start job for default target default.target. Sep 5 23:50:49.929898 systemd[1762]: Created slice app.slice - User Application Slice. Sep 5 23:50:49.929917 systemd[1762]: Reached target paths.target - Paths. Sep 5 23:50:49.929929 systemd[1762]: Reached target timers.target - Timers. Sep 5 23:50:49.937564 systemd[1762]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:50:49.946720 systemd[1762]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
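The repeating kubelet failures are expected at this point in the boot: the unit starts before /var/lib/kubelet/config.yaml exists, and systemd keeps rescheduling it until provisioning (typically kubeadm init/join) writes that file. For reference only, the missing file is a KubeletConfiguration manifest; a minimal sketch of its shape, not the configuration provisioning would actually generate for this node:

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # example field only
    EOF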
Sep 5 23:50:49.946838 systemd[1762]: Reached target sockets.target - Sockets. Sep 5 23:50:49.946872 systemd[1762]: Reached target basic.target - Basic System. Sep 5 23:50:49.946959 systemd[1762]: Reached target default.target - Main User Target. Sep 5 23:50:49.947017 systemd[1762]: Startup finished in 130ms. Sep 5 23:50:49.947270 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:50:49.956927 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:50:50.656840 systemd[1]: Started sshd@1-91.99.216.181:22-139.178.68.195:41096.service - OpenSSH per-connection server daemon (139.178.68.195:41096). Sep 5 23:50:51.651333 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 41096 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:51.653848 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:51.659313 systemd-logind[1566]: New session 2 of user core. Sep 5 23:50:51.666779 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:50:52.342792 sshd[1774]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:52.349768 systemd[1]: sshd@1-91.99.216.181:22-139.178.68.195:41096.service: Deactivated successfully. Sep 5 23:50:52.351676 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:50:52.353740 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:50:52.355033 systemd-logind[1566]: Removed session 2. Sep 5 23:50:52.543929 systemd[1]: Started sshd@2-91.99.216.181:22-139.178.68.195:41112.service - OpenSSH per-connection server daemon (139.178.68.195:41112). Sep 5 23:50:53.054945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 23:50:53.070751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:53.206028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:53.212638 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:53.280416 kubelet[1796]: E0905 23:50:53.280238 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:53.284101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:53.285293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:53.602126 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 41112 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:53.605474 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:53.612681 systemd-logind[1566]: New session 3 of user core. Sep 5 23:50:53.621978 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:50:54.330708 sshd[1782]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:54.338543 systemd[1]: sshd@2-91.99.216.181:22-139.178.68.195:41112.service: Deactivated successfully. Sep 5 23:50:54.342704 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:50:54.358224 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:50:54.360913 systemd-logind[1566]: Removed session 3. 
Sep 5 23:50:54.516942 systemd[1]: Started sshd@3-91.99.216.181:22-139.178.68.195:41114.service - OpenSSH per-connection server daemon (139.178.68.195:41114). Sep 5 23:50:55.568616 sshd[1810]: Accepted publickey for core from 139.178.68.195 port 41114 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:55.570903 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:55.576307 systemd-logind[1566]: New session 4 of user core. Sep 5 23:50:55.583994 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:50:56.301766 sshd[1810]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:56.306019 systemd[1]: sshd@3-91.99.216.181:22-139.178.68.195:41114.service: Deactivated successfully. Sep 5 23:50:56.310342 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:50:56.310947 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:50:56.312093 systemd-logind[1566]: Removed session 4. Sep 5 23:50:56.461903 systemd[1]: Started sshd@4-91.99.216.181:22-139.178.68.195:41128.service - OpenSSH per-connection server daemon (139.178.68.195:41128). Sep 5 23:50:57.451245 sshd[1818]: Accepted publickey for core from 139.178.68.195 port 41128 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:57.453831 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:57.458700 systemd-logind[1566]: New session 5 of user core. Sep 5 23:50:57.464919 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:50:57.991900 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:50:57.992190 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:58.007206 sudo[1822]: pam_unix(sudo:session): session closed for user root Sep 5 23:50:58.169953 sshd[1818]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:58.176439 systemd[1]: sshd@4-91.99.216.181:22-139.178.68.195:41128.service: Deactivated successfully. Sep 5 23:50:58.180356 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:50:58.181085 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:50:58.182377 systemd-logind[1566]: Removed session 5. Sep 5 23:50:58.342865 systemd[1]: Started sshd@5-91.99.216.181:22-139.178.68.195:41140.service - OpenSSH per-connection server daemon (139.178.68.195:41140). Sep 5 23:50:59.335978 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 41140 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:59.337927 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:59.343541 systemd-logind[1566]: New session 6 of user core. Sep 5 23:50:59.350987 systemd[1]: Started session-6.scope - Session 6 of User core. 
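The sudo entry above switches SELinux to enforcing with setenforce 1. The effective mode can be confirmed afterwards; a sketch:

    getenforce                     # prints Enforcing, Permissive or Disabled
    cat /sys/fs/selinux/enforce    # 1 when enforcing, 0 when permissive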
Sep 5 23:50:59.865933 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:50:59.866792 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:59.871586 sudo[1832]: pam_unix(sudo:session): session closed for user root Sep 5 23:50:59.877983 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:50:59.878793 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:50:59.893859 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:59.897998 auditctl[1835]: No rules Sep 5 23:50:59.898370 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:50:59.898649 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:59.913291 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:59.940826 augenrules[1854]: No rules Sep 5 23:50:59.943885 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:59.946938 sudo[1831]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:00.108280 sshd[1827]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:00.112974 systemd[1]: sshd@5-91.99.216.181:22-139.178.68.195:41140.service: Deactivated successfully. Sep 5 23:51:00.116760 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:51:00.118229 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:51:00.119200 systemd-logind[1566]: Removed session 6. Sep 5 23:51:00.287872 systemd[1]: Started sshd@6-91.99.216.181:22-139.178.68.195:50880.service - OpenSSH per-connection server daemon (139.178.68.195:50880). Sep 5 23:51:01.334004 sshd[1863]: Accepted publickey for core from 139.178.68.195 port 50880 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:51:01.336379 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:51:01.344555 systemd-logind[1566]: New session 7 of user core. Sep 5 23:51:01.350851 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:51:01.890395 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:51:01.890725 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:02.216033 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:51:02.216264 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:51:02.462461 dockerd[1882]: time="2025-09-05T23:51:02.462085140Z" level=info msg="Starting up" Sep 5 23:51:02.572816 dockerd[1882]: time="2025-09-05T23:51:02.572743142Z" level=info msg="Loading containers: start." Sep 5 23:51:02.685460 kernel: Initializing XFRM netlink socket Sep 5 23:51:02.770497 systemd-networkd[1244]: docker0: Link UP Sep 5 23:51:02.792929 dockerd[1882]: time="2025-09-05T23:51:02.792812278Z" level=info msg="Loading containers: done." Sep 5 23:51:02.811377 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4185639270-merged.mount: Deactivated successfully. 
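This stretch shows the audit-rules cycle end to end: the shipped SELinux and default rule files are removed, the service is stopped, auditctl and augenrules both report "No rules", and the service finishes cleanly with an empty rule set. Loading real rules later only requires dropping a file into /etc/audit/rules.d and restarting the same unit; a sketch with an illustrative watch rule (path and key are examples, not taken from this host):

    cat <<'EOF' >/etc/audit/rules.d/10-example.rules
    -w /etc/ssh/sshd_config -p wa -k sshd_config
    EOF
    systemctl restart audit-rules.service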
Sep 5 23:51:02.813856 dockerd[1882]: time="2025-09-05T23:51:02.813780135Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:51:02.813980 dockerd[1882]: time="2025-09-05T23:51:02.813940806Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:51:02.814118 dockerd[1882]: time="2025-09-05T23:51:02.814080867Z" level=info msg="Daemon has completed initialization" Sep 5 23:51:02.857768 dockerd[1882]: time="2025-09-05T23:51:02.857429042Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:51:02.857939 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:51:03.296882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 23:51:03.306758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:03.432637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:03.448172 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:03.493536 kubelet[2031]: E0905 23:51:03.493209 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:03.497631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:03.497929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:51:03.894573 containerd[1598]: time="2025-09-05T23:51:03.894529577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:51:04.560171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029762975.mount: Deactivated successfully. Sep 5 23:51:04.816740 update_engine[1570]: I20250905 23:51:04.816439 1570 update_attempter.cc:509] Updating boot flags... 
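With dockerd reporting "Daemon has completed initialization" and listening on /run/docker.sock, the engine and its bridge can be checked from the CLI; a sketch:

    docker version                           # client and daemon versions over the socket
    docker info | grep -i 'storage driver'   # should report overlay2, as logged
    ip addr show docker0                     # bridge brought up during 'Loading containers'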
Sep 5 23:51:04.858864 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2092) Sep 5 23:51:05.338207 containerd[1598]: time="2025-09-05T23:51:05.338131554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:05.340445 containerd[1598]: time="2025-09-05T23:51:05.340159319Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652533" Sep 5 23:51:05.340445 containerd[1598]: time="2025-09-05T23:51:05.340199294Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:05.345435 containerd[1598]: time="2025-09-05T23:51:05.343787527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:05.347007 containerd[1598]: time="2025-09-05T23:51:05.346941397Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.452360919s" Sep 5 23:51:05.347094 containerd[1598]: time="2025-09-05T23:51:05.347005221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:51:05.349808 containerd[1598]: time="2025-09-05T23:51:05.349777786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:51:06.505499 containerd[1598]: time="2025-09-05T23:51:06.505445152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.506799 containerd[1598]: time="2025-09-05T23:51:06.506757583Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460329" Sep 5 23:51:06.509422 containerd[1598]: time="2025-09-05T23:51:06.507990626Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.511310 containerd[1598]: time="2025-09-05T23:51:06.511270603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:06.512758 containerd[1598]: time="2025-09-05T23:51:06.512720283Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.16281625s" Sep 5 23:51:06.512890 containerd[1598]: time="2025-09-05T23:51:06.512873058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" 
returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:51:06.513357 containerd[1598]: time="2025-09-05T23:51:06.513336624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:51:07.571992 containerd[1598]: time="2025-09-05T23:51:07.571933273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.576775 containerd[1598]: time="2025-09-05T23:51:07.576699101Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125923" Sep 5 23:51:07.578685 containerd[1598]: time="2025-09-05T23:51:07.578574982Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.585995 containerd[1598]: time="2025-09-05T23:51:07.585855510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.587269 containerd[1598]: time="2025-09-05T23:51:07.587032912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.073454162s" Sep 5 23:51:07.587269 containerd[1598]: time="2025-09-05T23:51:07.587085010Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:51:07.588017 containerd[1598]: time="2025-09-05T23:51:07.587820261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:51:08.542785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629441186.mount: Deactivated successfully. 
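Editorial note: each "Pulled image ... size ... in ...s" entry above carries both the compressed size and the elapsed wall time, so effective pull throughput can be read straight out of the journal. A rough sketch, assuming journal text on stdin in the format logged here; the regular expression is tailored to these lines and is not a containerd API.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
	"time"
)

// Tailored to containerd journal lines such as:
//   ... msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" ... size \"25649241\" in 1.452360919s"
var pulled = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".*?size \\?"(\d+)\\?" in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		for _, m := range pulled.FindAllStringSubmatch(sc.Text(), -1) {
			size, _ := strconv.ParseFloat(m[2], 64)
			d, err := time.ParseDuration(m[3])
			if err != nil || d <= 0 {
				continue
			}
			mib := size / (1 << 20)
			fmt.Printf("%-50s %7.1f MiB in %-13s ~ %5.1f MiB/s\n", m[1], mib, d, mib/d.Seconds())
		}
	}
}

Fed the entries above (for example via journalctl piped to the program), this works out to roughly 17 MiB/s for the kube-apiserver pull and around 30 MiB/s for the larger etcd image further down.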
Sep 5 23:51:08.867752 containerd[1598]: time="2025-09-05T23:51:08.867673162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:08.869308 containerd[1598]: time="2025-09-05T23:51:08.869141320Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916121" Sep 5 23:51:08.871085 containerd[1598]: time="2025-09-05T23:51:08.870217991Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:08.872594 containerd[1598]: time="2025-09-05T23:51:08.872555632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:08.873503 containerd[1598]: time="2025-09-05T23:51:08.873474011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.285617098s" Sep 5 23:51:08.873608 containerd[1598]: time="2025-09-05T23:51:08.873590889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:51:08.874190 containerd[1598]: time="2025-09-05T23:51:08.874168397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:51:09.445181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532710977.mount: Deactivated successfully. 
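Editorial note: the recurring var-lib-containerd-tmpmounts-containerd\x2dmount….mount units are transient mount units containerd creates while unpacking image layers; their names are simply the mount path run through systemd's unit-name escaping, where "/" becomes "-" and characters such as "-" are hex-escaped as \x2d. A small illustration of that rule, written from the documented behaviour rather than taken from systemd's sources, so treat edge cases as approximate.

package main

import (
	"fmt"
	"strings"
)

// escapePath roughly mimics `systemd-escape --path`: strip slashes at the ends,
// turn "/" into "-", and hex-escape anything that is not [a-zA-Z0-9:_.]
// (a leading "." is escaped as well).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9',
			r == ':', r == '_', r == '.' && i != 0:
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	path := "/var/lib/containerd/tmpmounts/containerd-mount3629441186"
	fmt.Println(escapePath(path) + ".mount")
	// Prints: var-lib-containerd-tmpmounts-containerd\x2dmount3629441186.mount
}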
Sep 5 23:51:10.262661 containerd[1598]: time="2025-09-05T23:51:10.261354538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.263211 containerd[1598]: time="2025-09-05T23:51:10.263178478Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 5 23:51:10.264047 containerd[1598]: time="2025-09-05T23:51:10.263984597Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.267981 containerd[1598]: time="2025-09-05T23:51:10.267922565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.269579 containerd[1598]: time="2025-09-05T23:51:10.269531042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.395252169s" Sep 5 23:51:10.269579 containerd[1598]: time="2025-09-05T23:51:10.269576775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:51:10.270061 containerd[1598]: time="2025-09-05T23:51:10.270022187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:51:10.795304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275869503.mount: Deactivated successfully. 
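Editorial note: the images pulled above land in containerd's "k8s.io" namespace rather than in Docker's own store (the dockerd line earlier reports containerd-snapshotter=false, so the two stores are separate), which is why they would not appear in `docker images`. A hedged sketch listing them with the containerd Go client; the module path matches the containerd 1.7 series seen in this log (containerd 2.x moved the client package), and the socket path and namespace are the ones a CRI-configured kubelet uses.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the CRI runtime in this log serves on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images are kept under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.Name())
	}
}

On this node the output would include the references pulled above, such as registry.k8s.io/kube-apiserver:v1.31.12 and registry.k8s.io/coredns/coredns:v1.11.3.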
Sep 5 23:51:10.804108 containerd[1598]: time="2025-09-05T23:51:10.803116070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.804108 containerd[1598]: time="2025-09-05T23:51:10.804062910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 5 23:51:10.804948 containerd[1598]: time="2025-09-05T23:51:10.804895517Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.808303 containerd[1598]: time="2025-09-05T23:51:10.808237388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.809294 containerd[1598]: time="2025-09-05T23:51:10.809164023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 539.104945ms" Sep 5 23:51:10.809294 containerd[1598]: time="2025-09-05T23:51:10.809201274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:51:10.810276 containerd[1598]: time="2025-09-05T23:51:10.810245983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:51:11.424128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493350335.mount: Deactivated successfully. Sep 5 23:51:12.890482 containerd[1598]: time="2025-09-05T23:51:12.890362959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.892294 containerd[1598]: time="2025-09-05T23:51:12.892246669Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 5 23:51:12.893339 containerd[1598]: time="2025-09-05T23:51:12.892708674Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.896690 containerd[1598]: time="2025-09-05T23:51:12.896618253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.899230 containerd[1598]: time="2025-09-05T23:51:12.899175385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.088842058s" Sep 5 23:51:12.899230 containerd[1598]: time="2025-09-05T23:51:12.899222398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:51:13.547249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
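Editorial note: the restart counter reaching 5 is the same failure seen earlier in the log: the kubelet unit starts, cannot read /var/lib/kubelet/config.yaml, exits with status 1, and systemd schedules another attempt. The file is typically written by kubeadm (or an equivalent bootstrap step) shortly afterwards, at which point the loop resolves itself, as it does later in this log. A minimal sketch of the check that is failing, with the path taken verbatim from the journal; this mirrors the shape of the kubelet's error message, not its actual code path.

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"

	data, err := os.ReadFile(path)
	if err != nil {
		// Same shape as the journal entry:
		// "failed to read kubelet config file ..., error: open ...: no such file or directory"
		fmt.Printf("failed to read kubelet config file %q, error: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Printf("kubelet config present (%d bytes); the unit would proceed past this point\n", len(data))
}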
Sep 5 23:51:13.563871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:13.695181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:13.715871 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:13.776667 kubelet[2254]: E0905 23:51:13.776602 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:13.782124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:13.782296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:51:18.081630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:18.103806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:18.142212 systemd[1]: Reloading requested from client PID 2280 ('systemctl') (unit session-7.scope)... Sep 5 23:51:18.142232 systemd[1]: Reloading... Sep 5 23:51:18.267429 zram_generator::config[2325]: No configuration found. Sep 5 23:51:18.369124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:18.440511 systemd[1]: Reloading finished in 297 ms. Sep 5 23:51:18.491652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:18.496556 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:51:18.496812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:18.503785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:18.620630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:18.632110 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:18.677846 kubelet[2383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:18.677846 kubelet[2383]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:18.677846 kubelet[2383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 23:51:18.678302 kubelet[2383]: I0905 23:51:18.677892 2383 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:20.072928 kubelet[2383]: I0905 23:51:20.072884 2383 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:51:20.074957 kubelet[2383]: I0905 23:51:20.073363 2383 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:20.074957 kubelet[2383]: I0905 23:51:20.073826 2383 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:51:20.105349 kubelet[2383]: E0905 23:51:20.105311 2383 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.216.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:20.107061 kubelet[2383]: I0905 23:51:20.107029 2383 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:20.115418 kubelet[2383]: E0905 23:51:20.115348 2383 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:20.115571 kubelet[2383]: I0905 23:51:20.115484 2383 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:20.119456 kubelet[2383]: I0905 23:51:20.119230 2383 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:20.120585 kubelet[2383]: I0905 23:51:20.120536 2383 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:51:20.120771 kubelet[2383]: I0905 23:51:20.120708 2383 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:20.120977 kubelet[2383]: I0905 23:51:20.120753 2383 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-f09ad01745","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:51:20.120977 kubelet[2383]: I0905 23:51:20.120972 2383 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:20.121141 kubelet[2383]: I0905 23:51:20.120983 2383 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:51:20.121268 kubelet[2383]: I0905 23:51:20.121227 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:20.125451 kubelet[2383]: I0905 23:51:20.125370 2383 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:51:20.125451 kubelet[2383]: I0905 23:51:20.125430 2383 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:20.125451 kubelet[2383]: I0905 23:51:20.125459 2383 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:51:20.125640 kubelet[2383]: I0905 23:51:20.125575 2383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:20.131727 kubelet[2383]: W0905 23:51:20.130746 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.216.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-f09ad01745&limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:20.131727 kubelet[2383]: E0905 23:51:20.130892 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://91.99.216.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-f09ad01745&limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:20.131727 kubelet[2383]: W0905 23:51:20.131317 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.216.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:20.131727 kubelet[2383]: E0905 23:51:20.131355 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.216.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:20.132206 kubelet[2383]: I0905 23:51:20.132185 2383 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:20.132985 kubelet[2383]: I0905 23:51:20.132964 2383 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:20.133229 kubelet[2383]: W0905 23:51:20.133218 2383 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:51:20.135543 kubelet[2383]: I0905 23:51:20.135504 2383 server.go:1274] "Started kubelet" Sep 5 23:51:20.145417 kubelet[2383]: I0905 23:51:20.145371 2383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:20.148561 kubelet[2383]: E0905 23:51:20.147066 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.216.181:6443/api/v1/namespaces/default/events\": dial tcp 91.99.216.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-f09ad01745.186287f27a87e36b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-f09ad01745,UID:ci-4081-3-5-n-f09ad01745,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f09ad01745,},FirstTimestamp:2025-09-05 23:51:20.135476075 +0000 UTC m=+1.498173618,LastTimestamp:2025-09-05 23:51:20.135476075 +0000 UTC m=+1.498173618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f09ad01745,}" Sep 5 23:51:20.149641 kubelet[2383]: I0905 23:51:20.149602 2383 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:20.150899 kubelet[2383]: I0905 23:51:20.150876 2383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:20.152570 kubelet[2383]: I0905 23:51:20.152549 2383 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:51:20.153255 kubelet[2383]: E0905 23:51:20.153043 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-f09ad01745\" not found" Sep 5 23:51:20.154704 kubelet[2383]: I0905 23:51:20.154674 2383 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:51:20.154838 kubelet[2383]: I0905 23:51:20.154828 2383 reconciler.go:26] 
"Reconciler: start to sync state" Sep 5 23:51:20.155840 kubelet[2383]: I0905 23:51:20.155821 2383 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:51:20.157212 kubelet[2383]: I0905 23:51:20.157155 2383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:20.157552 kubelet[2383]: I0905 23:51:20.157536 2383 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:20.158609 kubelet[2383]: W0905 23:51:20.158212 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.216.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:20.158609 kubelet[2383]: E0905 23:51:20.158267 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.216.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:20.158609 kubelet[2383]: E0905 23:51:20.158321 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.216.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f09ad01745?timeout=10s\": dial tcp 91.99.216.181:6443: connect: connection refused" interval="200ms" Sep 5 23:51:20.159353 kubelet[2383]: I0905 23:51:20.159329 2383 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:20.160209 kubelet[2383]: I0905 23:51:20.159561 2383 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:20.162165 kubelet[2383]: I0905 23:51:20.161434 2383 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:20.174591 kubelet[2383]: I0905 23:51:20.174542 2383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:20.175977 kubelet[2383]: I0905 23:51:20.175938 2383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:20.176099 kubelet[2383]: I0905 23:51:20.176089 2383 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:51:20.176156 kubelet[2383]: I0905 23:51:20.176149 2383 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:51:20.176252 kubelet[2383]: E0905 23:51:20.176234 2383 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:20.184778 kubelet[2383]: E0905 23:51:20.184745 2383 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:20.188750 kubelet[2383]: W0905 23:51:20.185187 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.216.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:20.188868 kubelet[2383]: E0905 23:51:20.188838 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.216.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:20.193035 kubelet[2383]: I0905 23:51:20.192967 2383 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:51:20.193035 kubelet[2383]: I0905 23:51:20.193024 2383 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:20.193185 kubelet[2383]: I0905 23:51:20.193047 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:20.195007 kubelet[2383]: I0905 23:51:20.194976 2383 policy_none.go:49] "None policy: Start" Sep 5 23:51:20.195936 kubelet[2383]: I0905 23:51:20.195899 2383 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:51:20.195997 kubelet[2383]: I0905 23:51:20.195943 2383 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:20.203077 kubelet[2383]: I0905 23:51:20.203008 2383 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:20.203245 kubelet[2383]: I0905 23:51:20.203222 2383 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:20.203291 kubelet[2383]: I0905 23:51:20.203239 2383 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:20.204736 kubelet[2383]: I0905 23:51:20.204659 2383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:20.206641 kubelet[2383]: E0905 23:51:20.206569 2383 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-f09ad01745\" not found" Sep 5 23:51:20.306335 kubelet[2383]: I0905 23:51:20.305781 2383 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.306335 kubelet[2383]: E0905 23:51:20.306288 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.216.181:6443/api/v1/nodes\": dial tcp 91.99.216.181:6443: connect: connection refused" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.359799 kubelet[2383]: E0905 23:51:20.359641 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.216.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f09ad01745?timeout=10s\": dial tcp 91.99.216.181:6443: connect: connection refused" interval="400ms" Sep 5 23:51:20.456238 kubelet[2383]: I0905 23:51:20.456167 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 
23:51:20.457077 kubelet[2383]: I0905 23:51:20.456659 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457077 kubelet[2383]: I0905 23:51:20.456722 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dd4c30c441aa9734c7496750da87fa8-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-f09ad01745\" (UID: \"0dd4c30c441aa9734c7496750da87fa8\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457077 kubelet[2383]: I0905 23:51:20.456767 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457077 kubelet[2383]: I0905 23:51:20.456805 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457077 kubelet[2383]: I0905 23:51:20.456848 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457483 kubelet[2383]: I0905 23:51:20.456885 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457483 kubelet[2383]: I0905 23:51:20.456920 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.457483 kubelet[2383]: I0905 23:51:20.456957 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.509651 kubelet[2383]: I0905 23:51:20.509289 2383 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.509803 kubelet[2383]: E0905 23:51:20.509704 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.216.181:6443/api/v1/nodes\": dial tcp 91.99.216.181:6443: connect: connection refused" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.588957 containerd[1598]: time="2025-09-05T23:51:20.588878584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-f09ad01745,Uid:608d604ce0cd0c5faf228ee7cd584fb8,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:20.597205 containerd[1598]: time="2025-09-05T23:51:20.596823459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-f09ad01745,Uid:65cb7656ed99f18c2a84abc454710ad6,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:20.605196 containerd[1598]: time="2025-09-05T23:51:20.604334770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-f09ad01745,Uid:0dd4c30c441aa9734c7496750da87fa8,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:20.763362 kubelet[2383]: E0905 23:51:20.762347 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.216.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f09ad01745?timeout=10s\": dial tcp 91.99.216.181:6443: connect: connection refused" interval="800ms" Sep 5 23:51:20.852570 kubelet[2383]: E0905 23:51:20.852322 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.216.181:6443/api/v1/namespaces/default/events\": dial tcp 91.99.216.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-f09ad01745.186287f27a87e36b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-f09ad01745,UID:ci-4081-3-5-n-f09ad01745,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f09ad01745,},FirstTimestamp:2025-09-05 23:51:20.135476075 +0000 UTC m=+1.498173618,LastTimestamp:2025-09-05 23:51:20.135476075 +0000 UTC m=+1.498173618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f09ad01745,}" Sep 5 23:51:20.911927 kubelet[2383]: I0905 23:51:20.911607 2383 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.912073 kubelet[2383]: E0905 23:51:20.912035 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.216.181:6443/api/v1/nodes\": dial tcp 91.99.216.181:6443: connect: connection refused" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:20.998602 kubelet[2383]: W0905 23:51:20.998462 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.216.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:20.998602 kubelet[2383]: E0905 23:51:20.998534 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.216.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:21.141380 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3170566749.mount: Deactivated successfully. Sep 5 23:51:21.153143 containerd[1598]: time="2025-09-05T23:51:21.151853337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:21.154118 containerd[1598]: time="2025-09-05T23:51:21.153960574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 5 23:51:21.161443 containerd[1598]: time="2025-09-05T23:51:21.160866078Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:21.166300 containerd[1598]: time="2025-09-05T23:51:21.165145086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:21.168989 containerd[1598]: time="2025-09-05T23:51:21.168754327Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:21.171081 containerd[1598]: time="2025-09-05T23:51:21.170979788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:21.173344 containerd[1598]: time="2025-09-05T23:51:21.173310548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:21.179926 containerd[1598]: time="2025-09-05T23:51:21.178632312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:21.179926 containerd[1598]: time="2025-09-05T23:51:21.179300679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.360516ms" Sep 5 23:51:21.181123 containerd[1598]: time="2025-09-05T23:51:21.181076334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.026836ms" Sep 5 23:51:21.189849 containerd[1598]: time="2025-09-05T23:51:21.189604224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.155151ms" Sep 5 23:51:21.223560 kubelet[2383]: W0905 23:51:21.223369 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://91.99.216.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-f09ad01745&limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:21.223911 kubelet[2383]: E0905 23:51:21.223572 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.216.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-f09ad01745&limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:21.310243 kubelet[2383]: W0905 23:51:21.310176 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.216.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:21.312537 kubelet[2383]: E0905 23:51:21.310257 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.216.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:21.314572 containerd[1598]: time="2025-09-05T23:51:21.314461317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:21.314823 containerd[1598]: time="2025-09-05T23:51:21.314773415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:21.315699 containerd[1598]: time="2025-09-05T23:51:21.315658503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.316006 containerd[1598]: time="2025-09-05T23:51:21.315958279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.320506 containerd[1598]: time="2025-09-05T23:51:21.320393837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:21.320639 containerd[1598]: time="2025-09-05T23:51:21.320475092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:21.320639 containerd[1598]: time="2025-09-05T23:51:21.320493815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.320639 containerd[1598]: time="2025-09-05T23:51:21.320585553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.326665 containerd[1598]: time="2025-09-05T23:51:21.326536396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:21.326665 containerd[1598]: time="2025-09-05T23:51:21.326594167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:21.326665 containerd[1598]: time="2025-09-05T23:51:21.326610410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.329454 containerd[1598]: time="2025-09-05T23:51:21.329176455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:21.419322 containerd[1598]: time="2025-09-05T23:51:21.419209093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-f09ad01745,Uid:608d604ce0cd0c5faf228ee7cd584fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1db8e92b37a6290d31d1bb1752be875b1f64a9c1372cd85a0cbcf92e74fa86\"" Sep 5 23:51:21.426229 containerd[1598]: time="2025-09-05T23:51:21.426187650Z" level=info msg="CreateContainer within sandbox \"8c1db8e92b37a6290d31d1bb1752be875b1f64a9c1372cd85a0cbcf92e74fa86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:51:21.427084 containerd[1598]: time="2025-09-05T23:51:21.426794445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-f09ad01745,Uid:65cb7656ed99f18c2a84abc454710ad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"52f0e536081b7d9897526e86b94be776484a49663e285ed54cd44fce57e0e76c\"" Sep 5 23:51:21.428135 containerd[1598]: time="2025-09-05T23:51:21.428016475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-f09ad01745,Uid:0dd4c30c441aa9734c7496750da87fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa90da84f56da441a9ccd384d655f941f9ded9779439f854c5287352d06d8bdf\"" Sep 5 23:51:21.431762 containerd[1598]: time="2025-09-05T23:51:21.431711853Z" level=info msg="CreateContainer within sandbox \"52f0e536081b7d9897526e86b94be776484a49663e285ed54cd44fce57e0e76c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:51:21.433487 containerd[1598]: time="2025-09-05T23:51:21.433453222Z" level=info msg="CreateContainer within sandbox \"fa90da84f56da441a9ccd384d655f941f9ded9779439f854c5287352d06d8bdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:51:21.447569 containerd[1598]: time="2025-09-05T23:51:21.447389053Z" level=info msg="CreateContainer within sandbox \"8c1db8e92b37a6290d31d1bb1752be875b1f64a9c1372cd85a0cbcf92e74fa86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3226a84da99e7f025556d8538a37c153bed4352cbef0a5985c013ce386296cda\"" Sep 5 23:51:21.449054 containerd[1598]: time="2025-09-05T23:51:21.449015360Z" level=info msg="StartContainer for \"3226a84da99e7f025556d8538a37c153bed4352cbef0a5985c013ce386296cda\"" Sep 5 23:51:21.452893 containerd[1598]: time="2025-09-05T23:51:21.452846283Z" level=info msg="CreateContainer within sandbox \"fa90da84f56da441a9ccd384d655f941f9ded9779439f854c5287352d06d8bdf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3\"" Sep 5 23:51:21.453652 containerd[1598]: time="2025-09-05T23:51:21.453627591Z" level=info msg="StartContainer for \"153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3\"" Sep 5 23:51:21.458504 containerd[1598]: time="2025-09-05T23:51:21.458460023Z" level=info msg="CreateContainer within sandbox \"52f0e536081b7d9897526e86b94be776484a49663e285ed54cd44fce57e0e76c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc\"" Sep 5 23:51:21.460799 containerd[1598]: time="2025-09-05T23:51:21.460699846Z" level=info 
msg="StartContainer for \"919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc\"" Sep 5 23:51:21.540564 containerd[1598]: time="2025-09-05T23:51:21.539812542Z" level=info msg="StartContainer for \"919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc\" returns successfully" Sep 5 23:51:21.563338 kubelet[2383]: E0905 23:51:21.563282 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.216.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f09ad01745?timeout=10s\": dial tcp 91.99.216.181:6443: connect: connection refused" interval="1.6s" Sep 5 23:51:21.577762 containerd[1598]: time="2025-09-05T23:51:21.577313662Z" level=info msg="StartContainer for \"3226a84da99e7f025556d8538a37c153bed4352cbef0a5985c013ce386296cda\" returns successfully" Sep 5 23:51:21.578592 containerd[1598]: time="2025-09-05T23:51:21.577461050Z" level=info msg="StartContainer for \"153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3\" returns successfully" Sep 5 23:51:21.646888 kubelet[2383]: W0905 23:51:21.646770 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.216.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.216.181:6443: connect: connection refused Sep 5 23:51:21.646888 kubelet[2383]: E0905 23:51:21.646889 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.216.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.216.181:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:21.716227 kubelet[2383]: I0905 23:51:21.716114 2383 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:24.067424 kubelet[2383]: E0905 23:51:24.065663 2383 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-f09ad01745\" not found" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:24.127784 kubelet[2383]: I0905 23:51:24.127736 2383 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:24.140027 kubelet[2383]: I0905 23:51:24.139861 2383 apiserver.go:52] "Watching apiserver" Sep 5 23:51:24.255671 kubelet[2383]: I0905 23:51:24.255613 2383 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:51:26.086770 systemd[1]: Reloading requested from client PID 2658 ('systemctl') (unit session-7.scope)... Sep 5 23:51:26.087092 systemd[1]: Reloading... Sep 5 23:51:26.198598 zram_generator::config[2707]: No configuration found. Sep 5 23:51:26.316055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:26.396789 systemd[1]: Reloading finished in 309 ms. Sep 5 23:51:26.428236 kubelet[2383]: I0905 23:51:26.428163 2383 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:26.429630 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:26.445093 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 5 23:51:26.446285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:26.455587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:26.584264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:26.594930 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:26.660518 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:26.660518 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:26.660518 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:26.661149 kubelet[2753]: I0905 23:51:26.660588 2753 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:26.671964 kubelet[2753]: I0905 23:51:26.671917 2753 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:51:26.671964 kubelet[2753]: I0905 23:51:26.671949 2753 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:26.672215 kubelet[2753]: I0905 23:51:26.672190 2753 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:51:26.674012 kubelet[2753]: I0905 23:51:26.673987 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 23:51:26.678548 kubelet[2753]: I0905 23:51:26.678333 2753 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:26.685862 kubelet[2753]: E0905 23:51:26.685799 2753 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:26.687202 kubelet[2753]: I0905 23:51:26.686100 2753 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:26.689939 kubelet[2753]: I0905 23:51:26.689903 2753 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:26.690542 kubelet[2753]: I0905 23:51:26.690523 2753 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:51:26.690832 kubelet[2753]: I0905 23:51:26.690800 2753 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:26.691204 kubelet[2753]: I0905 23:51:26.690884 2753 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-f09ad01745","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 5 23:51:26.691360 kubelet[2753]: I0905 23:51:26.691345 2753 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691433 2753 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691484 2753 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691616 2753 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691633 2753 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691654 2753 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:51:26.691689 kubelet[2753]: I0905 23:51:26.691670 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:26.695416 kubelet[2753]: I0905 23:51:26.693869 2753 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:26.695416 kubelet[2753]: I0905 23:51:26.694411 2753 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:26.695416 kubelet[2753]: I0905 23:51:26.694910 2753 server.go:1274] "Started kubelet" Sep 5 23:51:26.698412 kubelet[2753]: I0905 23:51:26.696979 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:26.708950 kubelet[2753]: I0905 
23:51:26.708549 2753 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:26.710943 kubelet[2753]: I0905 23:51:26.710899 2753 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:51:26.714411 kubelet[2753]: I0905 23:51:26.712178 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:26.714411 kubelet[2753]: I0905 23:51:26.712476 2753 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:26.714550 kubelet[2753]: I0905 23:51:26.714455 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:26.717198 kubelet[2753]: I0905 23:51:26.715938 2753 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:51:26.717198 kubelet[2753]: E0905 23:51:26.716173 2753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-f09ad01745\" not found" Sep 5 23:51:26.719105 kubelet[2753]: I0905 23:51:26.719060 2753 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:51:26.719768 kubelet[2753]: I0905 23:51:26.719748 2753 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:26.724677 kubelet[2753]: I0905 23:51:26.724625 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:26.727430 kubelet[2753]: I0905 23:51:26.726128 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:26.727430 kubelet[2753]: I0905 23:51:26.726159 2753 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:51:26.727430 kubelet[2753]: I0905 23:51:26.726179 2753 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:51:26.727430 kubelet[2753]: E0905 23:51:26.726223 2753 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:26.734480 kubelet[2753]: I0905 23:51:26.734437 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:26.739558 kubelet[2753]: E0905 23:51:26.738190 2753 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:26.739558 kubelet[2753]: I0905 23:51:26.738796 2753 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:26.739558 kubelet[2753]: I0905 23:51:26.738809 2753 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:26.798450 kubelet[2753]: I0905 23:51:26.798388 2753 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:51:26.798450 kubelet[2753]: I0905 23:51:26.798448 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:26.798655 kubelet[2753]: I0905 23:51:26.798479 2753 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:26.798761 kubelet[2753]: I0905 23:51:26.798733 2753 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:51:26.798797 kubelet[2753]: I0905 23:51:26.798764 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:51:26.798819 kubelet[2753]: I0905 23:51:26.798803 2753 policy_none.go:49] "None policy: Start" Sep 5 23:51:26.800271 kubelet[2753]: I0905 23:51:26.799782 2753 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:51:26.800271 kubelet[2753]: I0905 23:51:26.799808 2753 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:26.800271 kubelet[2753]: I0905 23:51:26.799966 2753 state_mem.go:75] "Updated machine memory state" Sep 5 23:51:26.801425 kubelet[2753]: I0905 23:51:26.801358 2753 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:26.801732 kubelet[2753]: I0905 23:51:26.801713 2753 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:26.801843 kubelet[2753]: I0905 23:51:26.801810 2753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:26.802589 kubelet[2753]: I0905 23:51:26.802562 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:26.907439 kubelet[2753]: I0905 23:51:26.907397 2753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:26.918608 kubelet[2753]: I0905 23:51:26.918076 2753 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:26.918608 kubelet[2753]: I0905 23:51:26.918192 2753 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.021514 kubelet[2753]: I0905 23:51:27.021414 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dd4c30c441aa9734c7496750da87fa8-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-f09ad01745\" (UID: \"0dd4c30c441aa9734c7496750da87fa8\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.021514 kubelet[2753]: I0905 23:51:27.021466 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.021514 kubelet[2753]: I0905 23:51:27.021490 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.021514 kubelet[2753]: I0905 23:51:27.021508 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.021514 kubelet[2753]: I0905 23:51:27.021525 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.022010 kubelet[2753]: I0905 23:51:27.021545 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.022010 kubelet[2753]: I0905 23:51:27.021563 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.022010 kubelet[2753]: I0905 23:51:27.021581 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/608d604ce0cd0c5faf228ee7cd584fb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-f09ad01745\" (UID: \"608d604ce0cd0c5faf228ee7cd584fb8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.022010 kubelet[2753]: I0905 23:51:27.021599 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65cb7656ed99f18c2a84abc454710ad6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-f09ad01745\" (UID: \"65cb7656ed99f18c2a84abc454710ad6\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.085005 sudo[2785]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 23:51:27.085284 sudo[2785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 23:51:27.526516 sudo[2785]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:27.703953 kubelet[2753]: I0905 23:51:27.703899 2753 apiserver.go:52] "Watching apiserver" Sep 5 23:51:27.719779 kubelet[2753]: I0905 23:51:27.719673 2753 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:51:27.779577 kubelet[2753]: E0905 23:51:27.779145 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-scheduler-ci-4081-3-5-n-f09ad01745\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f09ad01745" Sep 5 23:51:27.814510 kubelet[2753]: I0905 23:51:27.814277 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-f09ad01745" podStartSLOduration=1.814259135 podStartE2EDuration="1.814259135s" podCreationTimestamp="2025-09-05 23:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:27.813331751 +0000 UTC m=+1.212341543" watchObservedRunningTime="2025-09-05 23:51:27.814259135 +0000 UTC m=+1.213268927" Sep 5 23:51:27.814510 kubelet[2753]: I0905 23:51:27.814422 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-f09ad01745" podStartSLOduration=1.814394916 podStartE2EDuration="1.814394916s" podCreationTimestamp="2025-09-05 23:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:27.799653146 +0000 UTC m=+1.198662938" watchObservedRunningTime="2025-09-05 23:51:27.814394916 +0000 UTC m=+1.213404708" Sep 5 23:51:27.825000 kubelet[2753]: I0905 23:51:27.824930 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-f09ad01745" podStartSLOduration=1.824883305 podStartE2EDuration="1.824883305s" podCreationTimestamp="2025-09-05 23:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:27.822979169 +0000 UTC m=+1.221988961" watchObservedRunningTime="2025-09-05 23:51:27.824883305 +0000 UTC m=+1.223893097" Sep 5 23:51:29.436823 sudo[1867]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:29.609685 sshd[1863]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:29.615859 systemd[1]: sshd@6-91.99.216.181:22-139.178.68.195:50880.service: Deactivated successfully. Sep 5 23:51:29.631208 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:51:29.632894 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:51:29.636951 systemd-logind[1566]: Removed session 7. Sep 5 23:51:32.353489 systemd[1]: Started sshd@7-91.99.216.181:22-175.206.1.60:55152.service - OpenSSH per-connection server daemon (175.206.1.60:55152). Sep 5 23:51:32.574574 kubelet[2753]: I0905 23:51:32.574303 2753 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:51:32.576575 containerd[1598]: time="2025-09-05T23:51:32.576223436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:51:32.576846 kubelet[2753]: I0905 23:51:32.576541 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:51:33.363121 kubelet[2753]: I0905 23:51:33.362926 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwv5f\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-kube-api-access-cwv5f\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364235 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-etc-cni-netd\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364262 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hubble-tls\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364281 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-config-path\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364299 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03ecc05b-effa-449a-9fd6-49da334d9548-kube-proxy\") pod \"kube-proxy-4nvtp\" (UID: \"03ecc05b-effa-449a-9fd6-49da334d9548\") " pod="kube-system/kube-proxy-4nvtp" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364314 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-xtables-lock\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365275 kubelet[2753]: I0905 23:51:33.364328 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-clustermesh-secrets\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365550 kubelet[2753]: I0905 23:51:33.364345 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-kernel\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365550 kubelet[2753]: I0905 23:51:33.364360 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03ecc05b-effa-449a-9fd6-49da334d9548-lib-modules\") pod \"kube-proxy-4nvtp\" (UID: \"03ecc05b-effa-449a-9fd6-49da334d9548\") " pod="kube-system/kube-proxy-4nvtp" Sep 5 
23:51:33.365550 kubelet[2753]: I0905 23:51:33.364375 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-lib-modules\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365550 kubelet[2753]: I0905 23:51:33.364390 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-run\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365550 kubelet[2753]: I0905 23:51:33.364416 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-bpf-maps\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365550 kubelet[2753]: I0905 23:51:33.364433 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd6dr\" (UniqueName: \"kubernetes.io/projected/03ecc05b-effa-449a-9fd6-49da334d9548-kube-api-access-dd6dr\") pod \"kube-proxy-4nvtp\" (UID: \"03ecc05b-effa-449a-9fd6-49da334d9548\") " pod="kube-system/kube-proxy-4nvtp" Sep 5 23:51:33.365737 kubelet[2753]: I0905 23:51:33.364448 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-cgroup\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365737 kubelet[2753]: I0905 23:51:33.364488 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cni-path\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365737 kubelet[2753]: I0905 23:51:33.364504 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03ecc05b-effa-449a-9fd6-49da334d9548-xtables-lock\") pod \"kube-proxy-4nvtp\" (UID: \"03ecc05b-effa-449a-9fd6-49da334d9548\") " pod="kube-system/kube-proxy-4nvtp" Sep 5 23:51:33.365737 kubelet[2753]: I0905 23:51:33.364522 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hostproc\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.365737 kubelet[2753]: I0905 23:51:33.364535 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-net\") pod \"cilium-rx94p\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " pod="kube-system/cilium-rx94p" Sep 5 23:51:33.644476 containerd[1598]: time="2025-09-05T23:51:33.644343960Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-4nvtp,Uid:03ecc05b-effa-449a-9fd6-49da334d9548,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:33.650163 containerd[1598]: time="2025-09-05T23:51:33.650098803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rx94p,Uid:ecbd88c5-fb01-4a23-a044-b7fb468bbf9a,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:33.672642 kubelet[2753]: I0905 23:51:33.670625 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27d862b8-7252-4fff-913d-7ef8fa524d35-cilium-config-path\") pod \"cilium-operator-5d85765b45-ppfnb\" (UID: \"27d862b8-7252-4fff-913d-7ef8fa524d35\") " pod="kube-system/cilium-operator-5d85765b45-ppfnb" Sep 5 23:51:33.672642 kubelet[2753]: I0905 23:51:33.670672 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgds\" (UniqueName: \"kubernetes.io/projected/27d862b8-7252-4fff-913d-7ef8fa524d35-kube-api-access-fbgds\") pod \"cilium-operator-5d85765b45-ppfnb\" (UID: \"27d862b8-7252-4fff-913d-7ef8fa524d35\") " pod="kube-system/cilium-operator-5d85765b45-ppfnb" Sep 5 23:51:33.743951 containerd[1598]: time="2025-09-05T23:51:33.743814155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:33.743951 containerd[1598]: time="2025-09-05T23:51:33.743885244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:33.743951 containerd[1598]: time="2025-09-05T23:51:33.743896446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:33.744729 containerd[1598]: time="2025-09-05T23:51:33.744104833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:33.753600 containerd[1598]: time="2025-09-05T23:51:33.752230111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:33.753600 containerd[1598]: time="2025-09-05T23:51:33.752340846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:33.753600 containerd[1598]: time="2025-09-05T23:51:33.752357968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:33.753600 containerd[1598]: time="2025-09-05T23:51:33.752491426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:33.818173 containerd[1598]: time="2025-09-05T23:51:33.818117651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rx94p,Uid:ecbd88c5-fb01-4a23-a044-b7fb468bbf9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\"" Sep 5 23:51:33.821607 containerd[1598]: time="2025-09-05T23:51:33.821254707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4nvtp,Uid:03ecc05b-effa-449a-9fd6-49da334d9548,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf50d9ca53bb211c461f598404b0f8243b975dd72550301710f351dc36e28d7\"" Sep 5 23:51:33.823731 containerd[1598]: time="2025-09-05T23:51:33.823443518Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 23:51:33.824483 containerd[1598]: time="2025-09-05T23:51:33.824380002Z" level=info msg="CreateContainer within sandbox \"aaf50d9ca53bb211c461f598404b0f8243b975dd72550301710f351dc36e28d7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:51:33.840061 containerd[1598]: time="2025-09-05T23:51:33.839927864Z" level=info msg="CreateContainer within sandbox \"aaf50d9ca53bb211c461f598404b0f8243b975dd72550301710f351dc36e28d7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da778f44f5f4f315f3acf29290e7f3080d09b96464ac70e2ab0c5e8bdf56f926\"" Sep 5 23:51:33.840682 containerd[1598]: time="2025-09-05T23:51:33.840633598Z" level=info msg="StartContainer for \"da778f44f5f4f315f3acf29290e7f3080d09b96464ac70e2ab0c5e8bdf56f926\"" Sep 5 23:51:33.901792 containerd[1598]: time="2025-09-05T23:51:33.901693218Z" level=info msg="StartContainer for \"da778f44f5f4f315f3acf29290e7f3080d09b96464ac70e2ab0c5e8bdf56f926\" returns successfully" Sep 5 23:51:33.982424 containerd[1598]: time="2025-09-05T23:51:33.982016632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ppfnb,Uid:27d862b8-7252-4fff-913d-7ef8fa524d35,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:34.012112 containerd[1598]: time="2025-09-05T23:51:34.012013420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:34.012391 containerd[1598]: time="2025-09-05T23:51:34.012129395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:34.012391 containerd[1598]: time="2025-09-05T23:51:34.012147677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.012391 containerd[1598]: time="2025-09-05T23:51:34.012245050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.073922 containerd[1598]: time="2025-09-05T23:51:34.073875320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ppfnb,Uid:27d862b8-7252-4fff-913d-7ef8fa524d35,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\"" Sep 5 23:51:34.817917 kubelet[2753]: I0905 23:51:34.817459 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4nvtp" podStartSLOduration=1.817438643 podStartE2EDuration="1.817438643s" podCreationTimestamp="2025-09-05 23:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:34.817278182 +0000 UTC m=+8.216288014" watchObservedRunningTime="2025-09-05 23:51:34.817438643 +0000 UTC m=+8.216448435" Sep 5 23:51:36.712076 sshd[2821]: Invalid user test from 175.206.1.60 port 55152 Sep 5 23:51:37.813209 sshd[3111]: pam_faillock(sshd:auth): User unknown Sep 5 23:51:37.818337 sshd[2821]: Postponed keyboard-interactive for invalid user test from 175.206.1.60 port 55152 ssh2 [preauth] Sep 5 23:51:39.173539 sshd[3111]: pam_unix(sshd:auth): check pass; user unknown Sep 5 23:51:39.173586 sshd[3111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=175.206.1.60 Sep 5 23:51:39.174512 sshd[3111]: pam_faillock(sshd:auth): User unknown Sep 5 23:51:41.301212 sshd[2821]: PAM: Permission denied for illegal user test from 175.206.1.60 Sep 5 23:51:41.301954 sshd[2821]: Failed keyboard-interactive/pam for invalid user test from 175.206.1.60 port 55152 ssh2 Sep 5 23:51:42.455026 sshd[2821]: Connection closed by invalid user test 175.206.1.60 port 55152 [preauth] Sep 5 23:51:42.459282 systemd[1]: sshd@7-91.99.216.181:22-175.206.1.60:55152.service: Deactivated successfully. Sep 5 23:51:47.481759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929844725.mount: Deactivated successfully. 
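Note: the sshd entries just above record a rejected probe for the invalid user "test" from 175.206.1.60, and similar probes recur later in this log. A minimal sketch, assuming journald-style text like the lines in this transcript, for tallying such attempts per source address; the regular expression is written against these exact messages and is not a general sshd parser.

```python
import re
from collections import Counter

# Matches sshd messages of the form seen above, e.g.
#   "sshd[2821]: Invalid user test from 175.206.1.60 port 55152"
INVALID_USER_RE = re.compile(
    r"sshd\[\d+\]: Invalid user (?P<user>\S+) from (?P<ip>[\d.]+) port \d+"
)

def count_invalid_logins(lines):
    """Tally invalid-user SSH attempts per (source IP, username) pair."""
    hits = Counter()
    for line in lines:
        m = INVALID_USER_RE.search(line)
        if m:
            hits[(m.group("ip"), m.group("user"))] += 1
    return hits

# Example against a line quoted from this log:
sample = ["Sep 5 23:51:36.712076 sshd[2821]: Invalid user test from 175.206.1.60 port 55152"]
print(count_invalid_logins(sample))  # Counter({('175.206.1.60', 'test'): 1})
```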
Sep 5 23:51:48.967082 containerd[1598]: time="2025-09-05T23:51:48.966992245Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.968001 containerd[1598]: time="2025-09-05T23:51:48.967962745Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 5 23:51:48.969443 containerd[1598]: time="2025-09-05T23:51:48.969102982Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.971781 containerd[1598]: time="2025-09-05T23:51:48.971731893Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.14824277s" Sep 5 23:51:48.971919 containerd[1598]: time="2025-09-05T23:51:48.971900870Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 5 23:51:48.973584 containerd[1598]: time="2025-09-05T23:51:48.973549640Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 23:51:48.977217 containerd[1598]: time="2025-09-05T23:51:48.977162212Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 23:51:48.991923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621325053.mount: Deactivated successfully. Sep 5 23:51:48.996969 containerd[1598]: time="2025-09-05T23:51:48.996903322Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\"" Sep 5 23:51:48.998176 containerd[1598]: time="2025-09-05T23:51:48.998107846Z" level=info msg="StartContainer for \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\"" Sep 5 23:51:49.028413 systemd[1]: run-containerd-runc-k8s.io-39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f-runc.JjHYN3.mount: Deactivated successfully. 
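Note: for scale, the cilium image pull reported above fetched an image of 157,636,062 bytes in 15.14824277 s, roughly 10 MB/s of effective pull throughput. A one-line check of that arithmetic, with the figures copied from the log; the rate is only an effective figure, since it ignores registry round-trips and decompression.

```python
# Figures copied from the containerd "Pulled image" entry above.
size_bytes = 157_636_062   # reported image size
duration_s = 15.14824277   # reported pull duration

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~10.4 MB/s effective throughput
```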
Sep 5 23:51:49.105626 containerd[1598]: time="2025-09-05T23:51:49.105535858Z" level=info msg="StartContainer for \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\" returns successfully" Sep 5 23:51:49.339814 containerd[1598]: time="2025-09-05T23:51:49.339567946Z" level=info msg="shim disconnected" id=39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f namespace=k8s.io Sep 5 23:51:49.339814 containerd[1598]: time="2025-09-05T23:51:49.339625072Z" level=warning msg="cleaning up after shim disconnected" id=39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f namespace=k8s.io Sep 5 23:51:49.339814 containerd[1598]: time="2025-09-05T23:51:49.339633833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:49.843745 containerd[1598]: time="2025-09-05T23:51:49.843087330Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 23:51:49.862041 containerd[1598]: time="2025-09-05T23:51:49.861817195Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\"" Sep 5 23:51:49.864627 containerd[1598]: time="2025-09-05T23:51:49.864576636Z" level=info msg="StartContainer for \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\"" Sep 5 23:51:49.920564 containerd[1598]: time="2025-09-05T23:51:49.920516247Z" level=info msg="StartContainer for \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\" returns successfully" Sep 5 23:51:49.940567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:51:49.940842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:49.940914 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:49.954733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:49.967924 containerd[1598]: time="2025-09-05T23:51:49.967797817Z" level=info msg="shim disconnected" id=b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37 namespace=k8s.io Sep 5 23:51:49.967924 containerd[1598]: time="2025-09-05T23:51:49.967867024Z" level=warning msg="cleaning up after shim disconnected" id=b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37 namespace=k8s.io Sep 5 23:51:49.967924 containerd[1598]: time="2025-09-05T23:51:49.967876025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:49.970256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:49.989624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f-rootfs.mount: Deactivated successfully. 
Sep 5 23:51:50.852427 containerd[1598]: time="2025-09-05T23:51:50.850477029Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 23:51:50.870658 containerd[1598]: time="2025-09-05T23:51:50.869850059Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\"" Sep 5 23:51:50.872441 containerd[1598]: time="2025-09-05T23:51:50.871180673Z" level=info msg="StartContainer for \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\"" Sep 5 23:51:50.949289 containerd[1598]: time="2025-09-05T23:51:50.949232330Z" level=info msg="StartContainer for \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\" returns successfully" Sep 5 23:51:50.981171 containerd[1598]: time="2025-09-05T23:51:50.981084817Z" level=info msg="shim disconnected" id=bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f namespace=k8s.io Sep 5 23:51:50.981171 containerd[1598]: time="2025-09-05T23:51:50.981158584Z" level=warning msg="cleaning up after shim disconnected" id=bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f namespace=k8s.io Sep 5 23:51:50.981171 containerd[1598]: time="2025-09-05T23:51:50.981172626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:50.987863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f-rootfs.mount: Deactivated successfully. Sep 5 23:51:51.387558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256824982.mount: Deactivated successfully. Sep 5 23:51:51.855237 containerd[1598]: time="2025-09-05T23:51:51.854754308Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 23:51:51.875671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304623902.mount: Deactivated successfully. 
Sep 5 23:51:51.887235 containerd[1598]: time="2025-09-05T23:51:51.887083650Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\"" Sep 5 23:51:51.891117 containerd[1598]: time="2025-09-05T23:51:51.889390120Z" level=info msg="StartContainer for \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\"" Sep 5 23:51:51.947550 containerd[1598]: time="2025-09-05T23:51:51.947119913Z" level=info msg="StartContainer for \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\" returns successfully" Sep 5 23:51:51.974283 containerd[1598]: time="2025-09-05T23:51:51.974204212Z" level=info msg="shim disconnected" id=5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e namespace=k8s.io Sep 5 23:51:51.974283 containerd[1598]: time="2025-09-05T23:51:51.974270819Z" level=warning msg="cleaning up after shim disconnected" id=5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e namespace=k8s.io Sep 5 23:51:51.974283 containerd[1598]: time="2025-09-05T23:51:51.974281500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:52.861444 containerd[1598]: time="2025-09-05T23:51:52.861388022Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 23:51:52.888006 containerd[1598]: time="2025-09-05T23:51:52.887939003Z" level=info msg="CreateContainer within sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\"" Sep 5 23:51:52.889777 containerd[1598]: time="2025-09-05T23:51:52.888803449Z" level=info msg="StartContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\"" Sep 5 23:51:52.963031 containerd[1598]: time="2025-09-05T23:51:52.961867062Z" level=info msg="StartContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" returns successfully" Sep 5 23:51:53.077795 kubelet[2753]: I0905 23:51:53.077716 2753 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:51:53.301021 kubelet[2753]: I0905 23:51:53.300891 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xlx\" (UniqueName: \"kubernetes.io/projected/56353096-40e2-43b5-9b0b-614c14c6b3a6-kube-api-access-b8xlx\") pod \"coredns-7c65d6cfc9-2gv7k\" (UID: \"56353096-40e2-43b5-9b0b-614c14c6b3a6\") " pod="kube-system/coredns-7c65d6cfc9-2gv7k" Sep 5 23:51:53.301604 kubelet[2753]: I0905 23:51:53.301115 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cn8p\" (UniqueName: \"kubernetes.io/projected/252bfb6d-275a-49f8-9e40-4ec59c96c293-kube-api-access-4cn8p\") pod \"coredns-7c65d6cfc9-r7hmm\" (UID: \"252bfb6d-275a-49f8-9e40-4ec59c96c293\") " pod="kube-system/coredns-7c65d6cfc9-r7hmm" Sep 5 23:51:53.301604 kubelet[2753]: I0905 23:51:53.301145 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252bfb6d-275a-49f8-9e40-4ec59c96c293-config-volume\") pod \"coredns-7c65d6cfc9-r7hmm\" (UID: 
\"252bfb6d-275a-49f8-9e40-4ec59c96c293\") " pod="kube-system/coredns-7c65d6cfc9-r7hmm" Sep 5 23:51:53.301604 kubelet[2753]: I0905 23:51:53.301174 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56353096-40e2-43b5-9b0b-614c14c6b3a6-config-volume\") pod \"coredns-7c65d6cfc9-2gv7k\" (UID: \"56353096-40e2-43b5-9b0b-614c14c6b3a6\") " pod="kube-system/coredns-7c65d6cfc9-2gv7k" Sep 5 23:51:53.432103 containerd[1598]: time="2025-09-05T23:51:53.432056861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gv7k,Uid:56353096-40e2-43b5-9b0b-614c14c6b3a6,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:53.437894 containerd[1598]: time="2025-09-05T23:51:53.437839146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r7hmm,Uid:252bfb6d-275a-49f8-9e40-4ec59c96c293,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:58.242034 containerd[1598]: time="2025-09-05T23:51:58.241938819Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:58.243526 containerd[1598]: time="2025-09-05T23:51:58.243447185Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 5 23:51:58.245132 containerd[1598]: time="2025-09-05T23:51:58.244437483Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:58.246926 containerd[1598]: time="2025-09-05T23:51:58.246787962Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 9.272857165s" Sep 5 23:51:58.246926 containerd[1598]: time="2025-09-05T23:51:58.246833757Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 5 23:51:58.250730 containerd[1598]: time="2025-09-05T23:51:58.250573094Z" level=info msg="CreateContainer within sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 23:51:58.268392 containerd[1598]: time="2025-09-05T23:51:58.268297036Z" level=info msg="CreateContainer within sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\"" Sep 5 23:51:58.270883 containerd[1598]: time="2025-09-05T23:51:58.270520528Z" level=info msg="StartContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\"" Sep 5 23:51:58.325991 containerd[1598]: time="2025-09-05T23:51:58.325830135Z" level=info msg="StartContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" returns successfully" Sep 
5 23:51:58.940591 kubelet[2753]: I0905 23:51:58.939385 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rx94p" podStartSLOduration=10.786501385 podStartE2EDuration="25.939368169s" podCreationTimestamp="2025-09-05 23:51:33 +0000 UTC" firstStartedPulling="2025-09-05 23:51:33.820184485 +0000 UTC m=+7.219194277" lastFinishedPulling="2025-09-05 23:51:48.973051189 +0000 UTC m=+22.372061061" observedRunningTime="2025-09-05 23:51:53.887972508 +0000 UTC m=+27.286982300" watchObservedRunningTime="2025-09-05 23:51:58.939368169 +0000 UTC m=+32.338377961" Sep 5 23:52:02.152529 systemd-networkd[1244]: cilium_host: Link UP Sep 5 23:52:02.155896 systemd-networkd[1244]: cilium_net: Link UP Sep 5 23:52:02.155901 systemd-networkd[1244]: cilium_net: Gained carrier Sep 5 23:52:02.156083 systemd-networkd[1244]: cilium_host: Gained carrier Sep 5 23:52:02.156448 systemd-networkd[1244]: cilium_host: Gained IPv6LL Sep 5 23:52:02.299646 systemd-networkd[1244]: cilium_vxlan: Link UP Sep 5 23:52:02.299653 systemd-networkd[1244]: cilium_vxlan: Gained carrier Sep 5 23:52:02.598443 kernel: NET: Registered PF_ALG protocol family Sep 5 23:52:03.149537 systemd-networkd[1244]: cilium_net: Gained IPv6LL Sep 5 23:52:03.336861 systemd-networkd[1244]: lxc_health: Link UP Sep 5 23:52:03.337101 systemd-networkd[1244]: lxc_health: Gained carrier Sep 5 23:52:03.516973 systemd-networkd[1244]: lxcf40c9fe02325: Link UP Sep 5 23:52:03.523514 kernel: eth0: renamed from tmp63cb9 Sep 5 23:52:03.530607 systemd-networkd[1244]: lxcf40c9fe02325: Gained carrier Sep 5 23:52:03.541080 systemd-networkd[1244]: lxc03fec3d7c7f3: Link UP Sep 5 23:52:03.550958 kernel: eth0: renamed from tmp42c8a Sep 5 23:52:03.560842 systemd-networkd[1244]: lxc03fec3d7c7f3: Gained carrier Sep 5 23:52:03.676901 kubelet[2753]: I0905 23:52:03.676755 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ppfnb" podStartSLOduration=6.50443688 podStartE2EDuration="30.676724748s" podCreationTimestamp="2025-09-05 23:51:33 +0000 UTC" firstStartedPulling="2025-09-05 23:51:34.07541744 +0000 UTC m=+7.474427232" lastFinishedPulling="2025-09-05 23:51:58.247705308 +0000 UTC m=+31.646715100" observedRunningTime="2025-09-05 23:51:58.959748159 +0000 UTC m=+32.358757951" watchObservedRunningTime="2025-09-05 23:52:03.676724748 +0000 UTC m=+37.075734540" Sep 5 23:52:03.918603 systemd-networkd[1244]: cilium_vxlan: Gained IPv6LL Sep 5 23:52:04.429772 systemd-networkd[1244]: lxc_health: Gained IPv6LL Sep 5 23:52:04.685778 systemd-networkd[1244]: lxcf40c9fe02325: Gained IPv6LL Sep 5 23:52:05.454033 systemd-networkd[1244]: lxc03fec3d7c7f3: Gained IPv6LL Sep 5 23:52:07.654848 containerd[1598]: time="2025-09-05T23:52:07.653720506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:07.654848 containerd[1598]: time="2025-09-05T23:52:07.653833539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:07.654848 containerd[1598]: time="2025-09-05T23:52:07.653869697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:07.654848 containerd[1598]: time="2025-09-05T23:52:07.654440822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:07.725725 containerd[1598]: time="2025-09-05T23:52:07.725615695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:07.726523 containerd[1598]: time="2025-09-05T23:52:07.726339211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:07.727138 containerd[1598]: time="2025-09-05T23:52:07.726508920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:07.727334 containerd[1598]: time="2025-09-05T23:52:07.727247515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:07.756341 containerd[1598]: time="2025-09-05T23:52:07.756239177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r7hmm,Uid:252bfb6d-275a-49f8-9e40-4ec59c96c293,Namespace:kube-system,Attempt:0,} returns sandbox id \"42c8ad56d9b6424f509226629b84f2e5d82a2b7d1ceb240497a8676c17b451fd\"" Sep 5 23:52:07.765432 containerd[1598]: time="2025-09-05T23:52:07.765250904Z" level=info msg="CreateContainer within sandbox \"42c8ad56d9b6424f509226629b84f2e5d82a2b7d1ceb240497a8676c17b451fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:07.806246 containerd[1598]: time="2025-09-05T23:52:07.805989845Z" level=info msg="CreateContainer within sandbox \"42c8ad56d9b6424f509226629b84f2e5d82a2b7d1ceb240497a8676c17b451fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b6e2e6fd4739f9bbb5bc20579628fc34cef2373b9355680e8943b87e0b58925\"" Sep 5 23:52:07.807623 containerd[1598]: time="2025-09-05T23:52:07.807569988Z" level=info msg="StartContainer for \"3b6e2e6fd4739f9bbb5bc20579628fc34cef2373b9355680e8943b87e0b58925\"" Sep 5 23:52:07.820748 containerd[1598]: time="2025-09-05T23:52:07.820700302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gv7k,Uid:56353096-40e2-43b5-9b0b-614c14c6b3a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"63cb9af5876c4e257b6f8de9a1a40f22888ede0a8abb445dbc6b06f68b284e7b\"" Sep 5 23:52:07.827047 containerd[1598]: time="2025-09-05T23:52:07.826974797Z" level=info msg="CreateContainer within sandbox \"63cb9af5876c4e257b6f8de9a1a40f22888ede0a8abb445dbc6b06f68b284e7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:07.848577 containerd[1598]: time="2025-09-05T23:52:07.848519075Z" level=info msg="CreateContainer within sandbox \"63cb9af5876c4e257b6f8de9a1a40f22888ede0a8abb445dbc6b06f68b284e7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea74804e104003097aa8b8716113be9ff4c42eb119b9091c2073443242ccea9b\"" Sep 5 23:52:07.853221 containerd[1598]: time="2025-09-05T23:52:07.853043918Z" level=info msg="StartContainer for \"ea74804e104003097aa8b8716113be9ff4c42eb119b9091c2073443242ccea9b\"" Sep 5 23:52:07.881095 containerd[1598]: time="2025-09-05T23:52:07.881018162Z" level=info msg="StartContainer for \"3b6e2e6fd4739f9bbb5bc20579628fc34cef2373b9355680e8943b87e0b58925\" returns successfully" Sep 5 23:52:07.943760 kubelet[2753]: I0905 23:52:07.941758 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r7hmm" podStartSLOduration=34.941741036 podStartE2EDuration="34.941741036s" 
podCreationTimestamp="2025-09-05 23:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:07.94054111 +0000 UTC m=+41.339550902" watchObservedRunningTime="2025-09-05 23:52:07.941741036 +0000 UTC m=+41.340750828" Sep 5 23:52:07.965808 containerd[1598]: time="2025-09-05T23:52:07.965281832Z" level=info msg="StartContainer for \"ea74804e104003097aa8b8716113be9ff4c42eb119b9091c2073443242ccea9b\" returns successfully" Sep 5 23:52:08.938425 kubelet[2753]: I0905 23:52:08.938065 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2gv7k" podStartSLOduration=35.938044167 podStartE2EDuration="35.938044167s" podCreationTimestamp="2025-09-05 23:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:08.937522876 +0000 UTC m=+42.336532708" watchObservedRunningTime="2025-09-05 23:52:08.938044167 +0000 UTC m=+42.337053959" Sep 5 23:53:51.740898 systemd[1]: Started sshd@8-91.99.216.181:22-139.178.68.195:42916.service - OpenSSH per-connection server daemon (139.178.68.195:42916). Sep 5 23:53:52.737872 sshd[4160]: Accepted publickey for core from 139.178.68.195 port 42916 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:53:52.740213 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:52.746034 systemd-logind[1566]: New session 8 of user core. Sep 5 23:53:52.752954 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:53:53.516454 sshd[4160]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:53.522799 systemd[1]: sshd@8-91.99.216.181:22-139.178.68.195:42916.service: Deactivated successfully. Sep 5 23:53:53.528846 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:53:53.530449 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:53:53.531490 systemd-logind[1566]: Removed session 8. Sep 5 23:53:58.686694 systemd[1]: Started sshd@9-91.99.216.181:22-139.178.68.195:42926.service - OpenSSH per-connection server daemon (139.178.68.195:42926). Sep 5 23:53:59.684985 sshd[4175]: Accepted publickey for core from 139.178.68.195 port 42926 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:53:59.688005 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:59.694155 systemd-logind[1566]: New session 9 of user core. Sep 5 23:53:59.702053 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:54:00.478753 sshd[4175]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:00.491945 systemd[1]: sshd@9-91.99.216.181:22-139.178.68.195:42926.service: Deactivated successfully. Sep 5 23:54:00.501875 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:00.505040 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:54:00.512628 systemd-logind[1566]: Removed session 9. Sep 5 23:54:05.649791 systemd[1]: Started sshd@10-91.99.216.181:22-139.178.68.195:58688.service - OpenSSH per-connection server daemon (139.178.68.195:58688). 
Sep 5 23:54:06.639231 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 58688 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:06.641350 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:06.647299 systemd-logind[1566]: New session 10 of user core. Sep 5 23:54:06.652893 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:54:07.407856 sshd[4194]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:07.415306 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:54:07.416024 systemd[1]: sshd@10-91.99.216.181:22-139.178.68.195:58688.service: Deactivated successfully. Sep 5 23:54:07.421962 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:54:07.423697 systemd-logind[1566]: Removed session 10. Sep 5 23:54:12.577850 systemd[1]: Started sshd@11-91.99.216.181:22-139.178.68.195:38324.service - OpenSSH per-connection server daemon (139.178.68.195:38324). Sep 5 23:54:13.577989 sshd[4209]: Accepted publickey for core from 139.178.68.195 port 38324 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:13.580183 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:13.586606 systemd-logind[1566]: New session 11 of user core. Sep 5 23:54:13.591778 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:54:14.341713 sshd[4209]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:14.346207 systemd[1]: sshd@11-91.99.216.181:22-139.178.68.195:38324.service: Deactivated successfully. Sep 5 23:54:14.351647 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:54:14.352832 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:54:14.353952 systemd-logind[1566]: Removed session 11. Sep 5 23:54:19.533233 systemd[1]: Started sshd@12-91.99.216.181:22-139.178.68.195:38336.service - OpenSSH per-connection server daemon (139.178.68.195:38336). Sep 5 23:54:20.582648 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 38336 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:20.584690 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:20.590003 systemd-logind[1566]: New session 12 of user core. Sep 5 23:54:20.600005 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:54:21.384948 sshd[4224]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:21.391329 systemd[1]: sshd@12-91.99.216.181:22-139.178.68.195:38336.service: Deactivated successfully. Sep 5 23:54:21.391408 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:54:21.394658 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:54:21.396059 systemd-logind[1566]: Removed session 12. Sep 5 23:54:26.229868 systemd[1]: Started sshd@13-91.99.216.181:22-194.85.69.22:57058.service - OpenSSH per-connection server daemon (194.85.69.22:57058). Sep 5 23:54:26.548625 systemd[1]: Started sshd@14-91.99.216.181:22-139.178.68.195:37122.service - OpenSSH per-connection server daemon (139.178.68.195:37122). 
Sep 5 23:54:27.546819 sshd[4240]: Accepted publickey for core from 139.178.68.195 port 37122 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:27.550728 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:27.560227 systemd-logind[1566]: New session 13 of user core. Sep 5 23:54:27.571538 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:54:28.015580 sshd[4239]: Invalid user default from 194.85.69.22 port 57058 Sep 5 23:54:28.319854 sshd[4240]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:28.324679 systemd[1]: sshd@14-91.99.216.181:22-139.178.68.195:37122.service: Deactivated successfully. Sep 5 23:54:28.330771 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:54:28.332110 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:54:28.333163 systemd-logind[1566]: Removed session 13. Sep 5 23:54:28.537356 sshd[4258]: pam_faillock(sshd:auth): User unknown Sep 5 23:54:28.541119 sshd[4239]: Postponed keyboard-interactive for invalid user default from 194.85.69.22 port 57058 ssh2 [preauth] Sep 5 23:54:28.891274 sshd[4258]: pam_unix(sshd:auth): check pass; user unknown Sep 5 23:54:28.891314 sshd[4258]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.85.69.22 Sep 5 23:54:28.892276 sshd[4258]: pam_faillock(sshd:auth): User unknown Sep 5 23:54:30.633922 systemd[1]: Started sshd@15-91.99.216.181:22-93.62.72.229:44902.service - OpenSSH per-connection server daemon (93.62.72.229:44902). Sep 5 23:54:31.157651 sshd[4239]: PAM: Permission denied for illegal user default from 194.85.69.22 Sep 5 23:54:31.158365 sshd[4239]: Failed keyboard-interactive/pam for invalid user default from 194.85.69.22 port 57058 ssh2 Sep 5 23:54:31.516494 sshd[4259]: Invalid user user from 93.62.72.229 port 44902 Sep 5 23:54:31.564431 sshd[4239]: Connection closed by invalid user default 194.85.69.22 port 57058 [preauth] Sep 5 23:54:31.570829 systemd[1]: sshd@13-91.99.216.181:22-194.85.69.22:57058.service: Deactivated successfully. Sep 5 23:54:31.686213 sshd[4264]: pam_faillock(sshd:auth): User unknown Sep 5 23:54:31.691778 sshd[4259]: Postponed keyboard-interactive for invalid user user from 93.62.72.229 port 44902 ssh2 [preauth] Sep 5 23:54:31.854957 sshd[4264]: pam_unix(sshd:auth): check pass; user unknown Sep 5 23:54:31.855010 sshd[4264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=93.62.72.229 Sep 5 23:54:31.856606 sshd[4264]: pam_faillock(sshd:auth): User unknown Sep 5 23:54:33.518053 systemd[1]: Started sshd@16-91.99.216.181:22-139.178.68.195:54562.service - OpenSSH per-connection server daemon (139.178.68.195:54562). Sep 5 23:54:34.204252 sshd[4259]: PAM: Permission denied for illegal user user from 93.62.72.229 Sep 5 23:54:34.206110 sshd[4259]: Failed keyboard-interactive/pam for invalid user user from 93.62.72.229 port 44902 ssh2 Sep 5 23:54:34.473709 sshd[4259]: Connection closed by invalid user user 93.62.72.229 port 44902 [preauth] Sep 5 23:54:34.478640 systemd[1]: sshd@15-91.99.216.181:22-93.62.72.229:44902.service: Deactivated successfully. Sep 5 23:54:34.577816 sshd[4265]: Accepted publickey for core from 139.178.68.195 port 54562 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:34.580672 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:34.589351 systemd-logind[1566]: New session 14 of user core. 
Sep 5 23:54:34.594935 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:54:35.377826 sshd[4265]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:35.383232 systemd[1]: sshd@16-91.99.216.181:22-139.178.68.195:54562.service: Deactivated successfully. Sep 5 23:54:35.386840 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:54:35.388764 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:54:35.389858 systemd-logind[1566]: Removed session 14. Sep 5 23:54:40.536933 systemd[1]: Started sshd@17-91.99.216.181:22-139.178.68.195:60414.service - OpenSSH per-connection server daemon (139.178.68.195:60414). Sep 5 23:54:41.538325 sshd[4286]: Accepted publickey for core from 139.178.68.195 port 60414 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:41.540439 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:41.547186 systemd-logind[1566]: New session 15 of user core. Sep 5 23:54:41.555793 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:54:42.300731 sshd[4286]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:42.304511 systemd[1]: sshd@17-91.99.216.181:22-139.178.68.195:60414.service: Deactivated successfully. Sep 5 23:54:42.318447 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:54:42.319899 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:54:42.321932 systemd-logind[1566]: Removed session 15. Sep 5 23:54:47.467692 systemd[1]: Started sshd@18-91.99.216.181:22-139.178.68.195:60428.service - OpenSSH per-connection server daemon (139.178.68.195:60428). Sep 5 23:54:48.471901 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 60428 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:48.473944 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:48.479426 systemd-logind[1566]: New session 16 of user core. Sep 5 23:54:48.485061 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:54:49.256500 sshd[4302]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:49.261839 systemd[1]: sshd@18-91.99.216.181:22-139.178.68.195:60428.service: Deactivated successfully. Sep 5 23:54:49.263075 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:54:49.272478 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:54:49.276456 systemd-logind[1566]: Removed session 16. Sep 5 23:54:54.448899 systemd[1]: Started sshd@19-91.99.216.181:22-139.178.68.195:46220.service - OpenSSH per-connection server daemon (139.178.68.195:46220). Sep 5 23:54:55.503465 sshd[4317]: Accepted publickey for core from 139.178.68.195 port 46220 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:55.504964 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:55.511164 systemd-logind[1566]: New session 17 of user core. Sep 5 23:54:55.516717 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:54:56.302393 sshd[4317]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:56.308441 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:54:56.308723 systemd[1]: sshd@19-91.99.216.181:22-139.178.68.195:46220.service: Deactivated successfully. Sep 5 23:54:56.313995 systemd[1]: session-17.scope: Deactivated successfully. 
Sep 5 23:54:56.317074 systemd-logind[1566]: Removed session 17. Sep 5 23:55:01.461731 systemd[1]: Started sshd@20-91.99.216.181:22-139.178.68.195:57226.service - OpenSSH per-connection server daemon (139.178.68.195:57226). Sep 5 23:55:02.461165 sshd[4332]: Accepted publickey for core from 139.178.68.195 port 57226 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:02.463605 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:02.472918 systemd-logind[1566]: New session 18 of user core. Sep 5 23:55:02.479365 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:55:03.244290 sshd[4332]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:03.251552 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:55:03.251961 systemd[1]: sshd@20-91.99.216.181:22-139.178.68.195:57226.service: Deactivated successfully. Sep 5 23:55:03.257067 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:55:03.258387 systemd-logind[1566]: Removed session 18. Sep 5 23:55:08.433808 systemd[1]: Started sshd@21-91.99.216.181:22-139.178.68.195:57228.service - OpenSSH per-connection server daemon (139.178.68.195:57228). Sep 5 23:55:09.483041 sshd[4349]: Accepted publickey for core from 139.178.68.195 port 57228 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:09.485535 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:09.490283 systemd-logind[1566]: New session 19 of user core. Sep 5 23:55:09.497823 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:55:10.291814 sshd[4349]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:10.297546 systemd[1]: sshd@21-91.99.216.181:22-139.178.68.195:57228.service: Deactivated successfully. Sep 5 23:55:10.302482 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:55:10.303122 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:55:10.305216 systemd-logind[1566]: Removed session 19. Sep 5 23:55:15.462711 systemd[1]: Started sshd@22-91.99.216.181:22-139.178.68.195:56228.service - OpenSSH per-connection server daemon (139.178.68.195:56228). Sep 5 23:55:16.466581 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 56228 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:16.469105 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:16.475106 systemd-logind[1566]: New session 20 of user core. Sep 5 23:55:16.480751 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:55:17.229560 sshd[4364]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:17.237485 systemd[1]: sshd@22-91.99.216.181:22-139.178.68.195:56228.service: Deactivated successfully. Sep 5 23:55:17.244525 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:55:17.246228 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:55:17.248199 systemd-logind[1566]: Removed session 20. Sep 5 23:55:22.397729 systemd[1]: Started sshd@23-91.99.216.181:22-139.178.68.195:44796.service - OpenSSH per-connection server daemon (139.178.68.195:44796). 
Sep 5 23:55:23.387013 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 44796 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:23.389148 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:23.395055 systemd-logind[1566]: New session 21 of user core. Sep 5 23:55:23.400442 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:55:24.145631 sshd[4380]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:24.149534 systemd[1]: sshd@23-91.99.216.181:22-139.178.68.195:44796.service: Deactivated successfully. Sep 5 23:55:24.155097 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:55:24.156482 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:55:24.157758 systemd-logind[1566]: Removed session 21. Sep 5 23:55:29.340989 systemd[1]: Started sshd@24-91.99.216.181:22-139.178.68.195:44810.service - OpenSSH per-connection server daemon (139.178.68.195:44810). Sep 5 23:55:30.393704 sshd[4401]: Accepted publickey for core from 139.178.68.195 port 44810 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:30.396152 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:30.402156 systemd-logind[1566]: New session 22 of user core. Sep 5 23:55:30.411187 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:55:31.194801 sshd[4401]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:31.199056 systemd[1]: sshd@24-91.99.216.181:22-139.178.68.195:44810.service: Deactivated successfully. Sep 5 23:55:31.203969 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:55:31.206106 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:55:31.207807 systemd-logind[1566]: Removed session 22. Sep 5 23:55:36.355757 systemd[1]: Started sshd@25-91.99.216.181:22-139.178.68.195:41332.service - OpenSSH per-connection server daemon (139.178.68.195:41332). Sep 5 23:55:37.347745 sshd[4417]: Accepted publickey for core from 139.178.68.195 port 41332 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:37.351275 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:37.358474 systemd-logind[1566]: New session 23 of user core. Sep 5 23:55:37.362941 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:55:38.115882 sshd[4417]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:38.122614 systemd[1]: sshd@25-91.99.216.181:22-139.178.68.195:41332.service: Deactivated successfully. Sep 5 23:55:38.127754 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:55:38.128834 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:55:38.131204 systemd-logind[1566]: Removed session 23. Sep 5 23:55:43.287453 systemd[1]: Started sshd@26-91.99.216.181:22-139.178.68.195:60678.service - OpenSSH per-connection server daemon (139.178.68.195:60678). Sep 5 23:55:44.279991 sshd[4432]: Accepted publickey for core from 139.178.68.195 port 60678 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:44.282018 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:44.288394 systemd-logind[1566]: New session 24 of user core. Sep 5 23:55:44.298551 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 5 23:55:45.038313 sshd[4432]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:45.042653 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Sep 5 23:55:45.043910 systemd[1]: sshd@26-91.99.216.181:22-139.178.68.195:60678.service: Deactivated successfully. Sep 5 23:55:45.046905 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:55:45.048082 systemd-logind[1566]: Removed session 24. Sep 5 23:55:50.207762 systemd[1]: Started sshd@27-91.99.216.181:22-139.178.68.195:55876.service - OpenSSH per-connection server daemon (139.178.68.195:55876). Sep 5 23:55:51.202444 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 55876 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:51.209080 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:51.218024 systemd-logind[1566]: New session 25 of user core. Sep 5 23:55:51.221753 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:55:51.969624 sshd[4448]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:51.974987 systemd[1]: sshd@27-91.99.216.181:22-139.178.68.195:55876.service: Deactivated successfully. Sep 5 23:55:51.975452 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:55:51.980094 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:55:51.982673 systemd-logind[1566]: Removed session 25. Sep 5 23:55:57.138725 systemd[1]: Started sshd@28-91.99.216.181:22-139.178.68.195:55878.service - OpenSSH per-connection server daemon (139.178.68.195:55878). Sep 5 23:55:58.135768 sshd[4463]: Accepted publickey for core from 139.178.68.195 port 55878 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:58.138265 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:58.143679 systemd-logind[1566]: New session 26 of user core. Sep 5 23:55:58.150726 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 23:55:58.896430 sshd[4463]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:58.905051 systemd[1]: sshd@28-91.99.216.181:22-139.178.68.195:55878.service: Deactivated successfully. Sep 5 23:55:58.909526 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit. Sep 5 23:55:58.910432 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 23:55:58.913724 systemd-logind[1566]: Removed session 26. Sep 5 23:56:04.084701 systemd[1]: Started sshd@29-91.99.216.181:22-139.178.68.195:54112.service - OpenSSH per-connection server daemon (139.178.68.195:54112). Sep 5 23:56:05.132367 sshd[4477]: Accepted publickey for core from 139.178.68.195 port 54112 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:05.134736 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:05.141530 systemd-logind[1566]: New session 27 of user core. Sep 5 23:56:05.145744 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 23:56:05.933832 sshd[4477]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:05.939493 systemd[1]: sshd@29-91.99.216.181:22-139.178.68.195:54112.service: Deactivated successfully. Sep 5 23:56:05.946227 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 23:56:05.947131 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit. Sep 5 23:56:05.948949 systemd-logind[1566]: Removed session 27. 
Sep 5 23:56:11.095722 systemd[1]: Started sshd@30-91.99.216.181:22-139.178.68.195:49682.service - OpenSSH per-connection server daemon (139.178.68.195:49682). Sep 5 23:56:12.090262 sshd[4493]: Accepted publickey for core from 139.178.68.195 port 49682 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:12.092562 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:12.097576 systemd-logind[1566]: New session 28 of user core. Sep 5 23:56:12.103809 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 23:56:12.847792 sshd[4493]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:12.853063 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit. Sep 5 23:56:12.853894 systemd[1]: sshd@30-91.99.216.181:22-139.178.68.195:49682.service: Deactivated successfully. Sep 5 23:56:12.858201 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 23:56:12.859350 systemd-logind[1566]: Removed session 28. Sep 5 23:56:18.022070 systemd[1]: Started sshd@31-91.99.216.181:22-139.178.68.195:49686.service - OpenSSH per-connection server daemon (139.178.68.195:49686). Sep 5 23:56:19.020265 sshd[4508]: Accepted publickey for core from 139.178.68.195 port 49686 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:19.022595 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:19.028725 systemd-logind[1566]: New session 29 of user core. Sep 5 23:56:19.035965 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 5 23:56:19.804128 sshd[4508]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:19.811471 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit. Sep 5 23:56:19.812556 systemd[1]: sshd@31-91.99.216.181:22-139.178.68.195:49686.service: Deactivated successfully. Sep 5 23:56:19.818191 systemd[1]: session-29.scope: Deactivated successfully. Sep 5 23:56:19.820819 systemd-logind[1566]: Removed session 29. Sep 5 23:56:24.980789 systemd[1]: Started sshd@32-91.99.216.181:22-139.178.68.195:50260.service - OpenSSH per-connection server daemon (139.178.68.195:50260). Sep 5 23:56:25.982467 sshd[4523]: Accepted publickey for core from 139.178.68.195 port 50260 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:25.984881 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:25.990327 systemd-logind[1566]: New session 30 of user core. Sep 5 23:56:25.992770 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 5 23:56:26.741741 sshd[4523]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:26.747746 systemd[1]: sshd@32-91.99.216.181:22-139.178.68.195:50260.service: Deactivated successfully. Sep 5 23:56:26.753738 systemd[1]: session-30.scope: Deactivated successfully. Sep 5 23:56:26.754608 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit. Sep 5 23:56:26.755542 systemd-logind[1566]: Removed session 30. Sep 5 23:56:31.919831 systemd[1]: Started sshd@33-91.99.216.181:22-139.178.68.195:48750.service - OpenSSH per-connection server daemon (139.178.68.195:48750). 
Sep 5 23:56:32.909563 sshd[4540]: Accepted publickey for core from 139.178.68.195 port 48750 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:32.913038 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:32.919251 systemd-logind[1566]: New session 31 of user core. Sep 5 23:56:32.923777 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 5 23:56:33.692820 sshd[4540]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:33.699949 systemd[1]: sshd@33-91.99.216.181:22-139.178.68.195:48750.service: Deactivated successfully. Sep 5 23:56:33.703866 systemd[1]: session-31.scope: Deactivated successfully. Sep 5 23:56:33.705112 systemd-logind[1566]: Session 31 logged out. Waiting for processes to exit. Sep 5 23:56:33.708545 systemd-logind[1566]: Removed session 31. Sep 5 23:56:38.873039 systemd[1]: Started sshd@34-91.99.216.181:22-139.178.68.195:48754.service - OpenSSH per-connection server daemon (139.178.68.195:48754). Sep 5 23:56:39.923175 sshd[4558]: Accepted publickey for core from 139.178.68.195 port 48754 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:39.925142 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:39.929912 systemd-logind[1566]: New session 32 of user core. Sep 5 23:56:39.935768 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 5 23:56:40.720881 sshd[4558]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:40.725997 systemd[1]: sshd@34-91.99.216.181:22-139.178.68.195:48754.service: Deactivated successfully. Sep 5 23:56:40.734191 systemd[1]: session-32.scope: Deactivated successfully. Sep 5 23:56:40.734237 systemd-logind[1566]: Session 32 logged out. Waiting for processes to exit. Sep 5 23:56:40.736229 systemd-logind[1566]: Removed session 32. Sep 5 23:56:45.897828 systemd[1]: Started sshd@35-91.99.216.181:22-139.178.68.195:39610.service - OpenSSH per-connection server daemon (139.178.68.195:39610). Sep 5 23:56:46.893382 sshd[4572]: Accepted publickey for core from 139.178.68.195 port 39610 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:46.897792 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:46.903924 systemd-logind[1566]: New session 33 of user core. Sep 5 23:56:46.907775 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 5 23:56:47.654459 sshd[4572]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:47.658939 systemd[1]: sshd@35-91.99.216.181:22-139.178.68.195:39610.service: Deactivated successfully. Sep 5 23:56:47.662790 systemd-logind[1566]: Session 33 logged out. Waiting for processes to exit. Sep 5 23:56:47.663068 systemd[1]: session-33.scope: Deactivated successfully. Sep 5 23:56:47.665338 systemd-logind[1566]: Removed session 33. Sep 5 23:56:52.829217 systemd[1]: Started sshd@36-91.99.216.181:22-139.178.68.195:44416.service - OpenSSH per-connection server daemon (139.178.68.195:44416). Sep 5 23:56:53.824183 sshd[4587]: Accepted publickey for core from 139.178.68.195 port 44416 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:56:53.827511 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:53.835663 systemd-logind[1566]: New session 34 of user core. Sep 5 23:56:53.843767 systemd[1]: Started session-34.scope - Session 34 of User core. 
Sep 5 23:56:54.591076 sshd[4587]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:54.594544 systemd-logind[1566]: Session 34 logged out. Waiting for processes to exit. Sep 5 23:56:54.595061 systemd[1]: sshd@36-91.99.216.181:22-139.178.68.195:44416.service: Deactivated successfully. Sep 5 23:56:54.600790 systemd[1]: session-34.scope: Deactivated successfully. Sep 5 23:56:54.606763 systemd-logind[1566]: Removed session 34. Sep 5 23:56:59.757719 systemd[1]: Started sshd@37-91.99.216.181:22-139.178.68.195:44432.service - OpenSSH per-connection server daemon (139.178.68.195:44432). Sep 5 23:57:00.760371 sshd[4601]: Accepted publickey for core from 139.178.68.195 port 44432 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:00.762799 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:00.773282 systemd-logind[1566]: New session 35 of user core. Sep 5 23:57:00.776764 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 5 23:57:01.549258 sshd[4601]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:01.554028 systemd-logind[1566]: Session 35 logged out. Waiting for processes to exit. Sep 5 23:57:01.555123 systemd[1]: sshd@37-91.99.216.181:22-139.178.68.195:44432.service: Deactivated successfully. Sep 5 23:57:01.558864 systemd[1]: session-35.scope: Deactivated successfully. Sep 5 23:57:01.559614 systemd-logind[1566]: Removed session 35. Sep 5 23:57:06.721729 systemd[1]: Started sshd@38-91.99.216.181:22-139.178.68.195:56088.service - OpenSSH per-connection server daemon (139.178.68.195:56088). Sep 5 23:57:07.715736 sshd[4619]: Accepted publickey for core from 139.178.68.195 port 56088 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:07.717557 sshd[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:07.723253 systemd-logind[1566]: New session 36 of user core. Sep 5 23:57:07.726416 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 5 23:57:08.477231 sshd[4619]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:08.483722 systemd[1]: sshd@38-91.99.216.181:22-139.178.68.195:56088.service: Deactivated successfully. Sep 5 23:57:08.488027 systemd-logind[1566]: Session 36 logged out. Waiting for processes to exit. Sep 5 23:57:08.488176 systemd[1]: session-36.scope: Deactivated successfully. Sep 5 23:57:08.489992 systemd-logind[1566]: Removed session 36. Sep 5 23:57:13.666001 systemd[1]: Started sshd@39-91.99.216.181:22-139.178.68.195:45164.service - OpenSSH per-connection server daemon (139.178.68.195:45164). Sep 5 23:57:14.714444 sshd[4634]: Accepted publickey for core from 139.178.68.195 port 45164 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:14.716873 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:14.722767 systemd-logind[1566]: New session 37 of user core. Sep 5 23:57:14.728305 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 5 23:57:15.515121 sshd[4634]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:15.521022 systemd[1]: sshd@39-91.99.216.181:22-139.178.68.195:45164.service: Deactivated successfully. Sep 5 23:57:15.526630 systemd[1]: session-37.scope: Deactivated successfully. Sep 5 23:57:15.527994 systemd-logind[1566]: Session 37 logged out. Waiting for processes to exit. Sep 5 23:57:15.529081 systemd-logind[1566]: Removed session 37. 
Sep 5 23:57:20.696698 systemd[1]: Started sshd@40-91.99.216.181:22-139.178.68.195:38026.service - OpenSSH per-connection server daemon (139.178.68.195:38026). Sep 5 23:57:21.748135 sshd[4649]: Accepted publickey for core from 139.178.68.195 port 38026 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:21.750901 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:21.758264 systemd-logind[1566]: New session 38 of user core. Sep 5 23:57:21.765955 systemd[1]: Started session-38.scope - Session 38 of User core. Sep 5 23:57:22.544853 sshd[4649]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:22.552288 systemd-logind[1566]: Session 38 logged out. Waiting for processes to exit. Sep 5 23:57:22.553388 systemd[1]: sshd@40-91.99.216.181:22-139.178.68.195:38026.service: Deactivated successfully. Sep 5 23:57:22.561357 systemd[1]: session-38.scope: Deactivated successfully. Sep 5 23:57:22.563743 systemd-logind[1566]: Removed session 38. Sep 5 23:57:27.705476 systemd[1]: Started sshd@41-91.99.216.181:22-139.178.68.195:38040.service - OpenSSH per-connection server daemon (139.178.68.195:38040). Sep 5 23:57:28.703834 sshd[4666]: Accepted publickey for core from 139.178.68.195 port 38040 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:28.705684 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:28.712547 systemd-logind[1566]: New session 39 of user core. Sep 5 23:57:28.718886 systemd[1]: Started session-39.scope - Session 39 of User core. Sep 5 23:57:29.460813 sshd[4666]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:29.465999 systemd[1]: sshd@41-91.99.216.181:22-139.178.68.195:38040.service: Deactivated successfully. Sep 5 23:57:29.469901 systemd[1]: session-39.scope: Deactivated successfully. Sep 5 23:57:29.471771 systemd-logind[1566]: Session 39 logged out. Waiting for processes to exit. Sep 5 23:57:29.473929 systemd-logind[1566]: Removed session 39. Sep 5 23:57:34.650857 systemd[1]: Started sshd@42-91.99.216.181:22-139.178.68.195:52902.service - OpenSSH per-connection server daemon (139.178.68.195:52902). Sep 5 23:57:35.707277 sshd[4685]: Accepted publickey for core from 139.178.68.195 port 52902 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:35.709800 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:35.715774 systemd-logind[1566]: New session 40 of user core. Sep 5 23:57:35.722173 systemd[1]: Started session-40.scope - Session 40 of User core. Sep 5 23:57:36.531314 sshd[4685]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:36.537308 systemd-logind[1566]: Session 40 logged out. Waiting for processes to exit. Sep 5 23:57:36.537964 systemd[1]: sshd@42-91.99.216.181:22-139.178.68.195:52902.service: Deactivated successfully. Sep 5 23:57:36.542182 systemd[1]: session-40.scope: Deactivated successfully. Sep 5 23:57:36.543258 systemd-logind[1566]: Removed session 40. Sep 5 23:57:41.699755 systemd[1]: Started sshd@43-91.99.216.181:22-139.178.68.195:53164.service - OpenSSH per-connection server daemon (139.178.68.195:53164). 
Sep 5 23:57:42.745931 sshd[4699]: Accepted publickey for core from 139.178.68.195 port 53164 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:42.748583 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:42.754521 systemd-logind[1566]: New session 41 of user core. Sep 5 23:57:42.760857 systemd[1]: Started session-41.scope - Session 41 of User core. Sep 5 23:57:43.540778 sshd[4699]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:43.547059 systemd[1]: sshd@43-91.99.216.181:22-139.178.68.195:53164.service: Deactivated successfully. Sep 5 23:57:43.551725 systemd[1]: session-41.scope: Deactivated successfully. Sep 5 23:57:43.552995 systemd-logind[1566]: Session 41 logged out. Waiting for processes to exit. Sep 5 23:57:43.555000 systemd-logind[1566]: Removed session 41. Sep 5 23:57:48.720899 systemd[1]: Started sshd@44-91.99.216.181:22-139.178.68.195:53168.service - OpenSSH per-connection server daemon (139.178.68.195:53168). Sep 5 23:57:49.771339 sshd[4714]: Accepted publickey for core from 139.178.68.195 port 53168 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:49.773904 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:49.778782 systemd-logind[1566]: New session 42 of user core. Sep 5 23:57:49.788004 systemd[1]: Started session-42.scope - Session 42 of User core. Sep 5 23:57:50.574034 sshd[4714]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:50.586583 systemd[1]: sshd@44-91.99.216.181:22-139.178.68.195:53168.service: Deactivated successfully. Sep 5 23:57:50.598426 systemd[1]: session-42.scope: Deactivated successfully. Sep 5 23:57:50.598905 systemd-logind[1566]: Session 42 logged out. Waiting for processes to exit. Sep 5 23:57:50.602125 systemd-logind[1566]: Removed session 42. Sep 5 23:57:55.742094 systemd[1]: Started sshd@45-91.99.216.181:22-139.178.68.195:49264.service - OpenSSH per-connection server daemon (139.178.68.195:49264). Sep 5 23:57:56.738041 sshd[4729]: Accepted publickey for core from 139.178.68.195 port 49264 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:56.740274 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:56.746475 systemd-logind[1566]: New session 43 of user core. Sep 5 23:57:56.751761 systemd[1]: Started session-43.scope - Session 43 of User core. Sep 5 23:57:57.502087 sshd[4729]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:57.507004 systemd[1]: sshd@45-91.99.216.181:22-139.178.68.195:49264.service: Deactivated successfully. Sep 5 23:57:57.510946 systemd[1]: session-43.scope: Deactivated successfully. Sep 5 23:57:57.511092 systemd-logind[1566]: Session 43 logged out. Waiting for processes to exit. Sep 5 23:57:57.513151 systemd-logind[1566]: Removed session 43. Sep 5 23:57:57.670790 systemd[1]: Started sshd@46-91.99.216.181:22-139.178.68.195:49280.service - OpenSSH per-connection server daemon (139.178.68.195:49280). Sep 5 23:57:58.664373 sshd[4744]: Accepted publickey for core from 139.178.68.195 port 49280 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:57:58.667058 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:58.675889 systemd-logind[1566]: New session 44 of user core. Sep 5 23:57:58.688962 systemd[1]: Started session-44.scope - Session 44 of User core. 
Sep 5 23:57:59.464684 sshd[4744]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:59.470623 systemd-logind[1566]: Session 44 logged out. Waiting for processes to exit. Sep 5 23:57:59.471470 systemd[1]: sshd@46-91.99.216.181:22-139.178.68.195:49280.service: Deactivated successfully. Sep 5 23:57:59.475573 systemd[1]: session-44.scope: Deactivated successfully. Sep 5 23:57:59.476880 systemd-logind[1566]: Removed session 44. Sep 5 23:57:59.652830 systemd[1]: Started sshd@47-91.99.216.181:22-139.178.68.195:49284.service - OpenSSH per-connection server daemon (139.178.68.195:49284). Sep 5 23:58:00.705131 sshd[4756]: Accepted publickey for core from 139.178.68.195 port 49284 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:00.707271 sshd[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:00.713460 systemd-logind[1566]: New session 45 of user core. Sep 5 23:58:00.717765 systemd[1]: Started session-45.scope - Session 45 of User core. Sep 5 23:58:01.508830 sshd[4756]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:01.513493 systemd[1]: sshd@47-91.99.216.181:22-139.178.68.195:49284.service: Deactivated successfully. Sep 5 23:58:01.517514 systemd-logind[1566]: Session 45 logged out. Waiting for processes to exit. Sep 5 23:58:01.517939 systemd[1]: session-45.scope: Deactivated successfully. Sep 5 23:58:01.520257 systemd-logind[1566]: Removed session 45. Sep 5 23:58:02.165737 systemd[1]: Started sshd@48-91.99.216.181:22-188.166.242.21:52790.service - OpenSSH per-connection server daemon (188.166.242.21:52790). Sep 5 23:58:02.348852 sshd[4769]: Connection closed by 188.166.242.21 port 52790 Sep 5 23:58:02.349960 systemd[1]: sshd@48-91.99.216.181:22-188.166.242.21:52790.service: Deactivated successfully. Sep 5 23:58:06.680805 systemd[1]: Started sshd@49-91.99.216.181:22-139.178.68.195:51556.service - OpenSSH per-connection server daemon (139.178.68.195:51556). Sep 5 23:58:07.673469 sshd[4775]: Accepted publickey for core from 139.178.68.195 port 51556 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:07.675565 sshd[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:07.680662 systemd-logind[1566]: New session 46 of user core. Sep 5 23:58:07.689587 systemd[1]: Started session-46.scope - Session 46 of User core. Sep 5 23:58:08.449452 sshd[4775]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:08.456201 systemd[1]: sshd@49-91.99.216.181:22-139.178.68.195:51556.service: Deactivated successfully. Sep 5 23:58:08.457845 systemd-logind[1566]: Session 46 logged out. Waiting for processes to exit. Sep 5 23:58:08.460555 systemd[1]: session-46.scope: Deactivated successfully. Sep 5 23:58:08.461381 systemd-logind[1566]: Removed session 46. Sep 5 23:58:13.619070 systemd[1]: Started sshd@50-91.99.216.181:22-139.178.68.195:47824.service - OpenSSH per-connection server daemon (139.178.68.195:47824). Sep 5 23:58:14.612887 sshd[4789]: Accepted publickey for core from 139.178.68.195 port 47824 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:14.614985 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:14.619836 systemd-logind[1566]: New session 47 of user core. Sep 5 23:58:14.631997 systemd[1]: Started session-47.scope - Session 47 of User core. 
Sep 5 23:58:15.371920 sshd[4789]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:15.375999 systemd[1]: sshd@50-91.99.216.181:22-139.178.68.195:47824.service: Deactivated successfully. Sep 5 23:58:15.380190 systemd[1]: session-47.scope: Deactivated successfully. Sep 5 23:58:15.381394 systemd-logind[1566]: Session 47 logged out. Waiting for processes to exit. Sep 5 23:58:15.383815 systemd-logind[1566]: Removed session 47. Sep 5 23:58:20.552127 systemd[1]: Started sshd@51-91.99.216.181:22-139.178.68.195:42230.service - OpenSSH per-connection server daemon (139.178.68.195:42230). Sep 5 23:58:21.551425 sshd[4803]: Accepted publickey for core from 139.178.68.195 port 42230 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:21.554002 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:21.559533 systemd-logind[1566]: New session 48 of user core. Sep 5 23:58:21.564848 systemd[1]: Started session-48.scope - Session 48 of User core. Sep 5 23:58:22.308782 sshd[4803]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:22.314098 systemd[1]: sshd@51-91.99.216.181:22-139.178.68.195:42230.service: Deactivated successfully. Sep 5 23:58:22.318911 systemd[1]: session-48.scope: Deactivated successfully. Sep 5 23:58:22.319961 systemd-logind[1566]: Session 48 logged out. Waiting for processes to exit. Sep 5 23:58:22.321787 systemd-logind[1566]: Removed session 48. Sep 5 23:58:27.479696 systemd[1]: Started sshd@52-91.99.216.181:22-139.178.68.195:42246.service - OpenSSH per-connection server daemon (139.178.68.195:42246). Sep 5 23:58:28.472516 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 42246 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:28.474512 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:28.478546 systemd-logind[1566]: New session 49 of user core. Sep 5 23:58:28.489871 systemd[1]: Started session-49.scope - Session 49 of User core. Sep 5 23:58:29.247932 sshd[4818]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:29.254563 systemd[1]: sshd@52-91.99.216.181:22-139.178.68.195:42246.service: Deactivated successfully. Sep 5 23:58:29.256998 systemd-logind[1566]: Session 49 logged out. Waiting for processes to exit. Sep 5 23:58:29.257944 systemd[1]: session-49.scope: Deactivated successfully. Sep 5 23:58:29.259354 systemd-logind[1566]: Removed session 49. Sep 5 23:58:34.418377 systemd[1]: Started sshd@53-91.99.216.181:22-139.178.68.195:55454.service - OpenSSH per-connection server daemon (139.178.68.195:55454). Sep 5 23:58:35.413953 sshd[4834]: Accepted publickey for core from 139.178.68.195 port 55454 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:35.416322 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:35.421695 systemd-logind[1566]: New session 50 of user core. Sep 5 23:58:35.433974 systemd[1]: Started session-50.scope - Session 50 of User core. Sep 5 23:58:36.175137 sshd[4834]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:36.179783 systemd-logind[1566]: Session 50 logged out. Waiting for processes to exit. Sep 5 23:58:36.180773 systemd[1]: sshd@53-91.99.216.181:22-139.178.68.195:55454.service: Deactivated successfully. Sep 5 23:58:36.187943 systemd[1]: session-50.scope: Deactivated successfully. Sep 5 23:58:36.189434 systemd-logind[1566]: Removed session 50. 
Sep 5 23:58:41.348716 systemd[1]: Started sshd@54-91.99.216.181:22-139.178.68.195:35450.service - OpenSSH per-connection server daemon (139.178.68.195:35450). Sep 5 23:58:42.346879 sshd[4848]: Accepted publickey for core from 139.178.68.195 port 35450 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:42.349290 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:42.354476 systemd-logind[1566]: New session 51 of user core. Sep 5 23:58:42.361071 systemd[1]: Started session-51.scope - Session 51 of User core. Sep 5 23:58:43.111890 sshd[4848]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:43.115603 systemd[1]: sshd@54-91.99.216.181:22-139.178.68.195:35450.service: Deactivated successfully. Sep 5 23:58:43.120748 systemd[1]: session-51.scope: Deactivated successfully. Sep 5 23:58:43.122060 systemd-logind[1566]: Session 51 logged out. Waiting for processes to exit. Sep 5 23:58:43.123227 systemd-logind[1566]: Removed session 51. Sep 5 23:58:48.287986 systemd[1]: Started sshd@55-91.99.216.181:22-139.178.68.195:35458.service - OpenSSH per-connection server daemon (139.178.68.195:35458). Sep 5 23:58:49.284029 sshd[4862]: Accepted publickey for core from 139.178.68.195 port 35458 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:49.287253 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:49.294603 systemd-logind[1566]: New session 52 of user core. Sep 5 23:58:49.298813 systemd[1]: Started session-52.scope - Session 52 of User core. Sep 5 23:58:50.087149 sshd[4862]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:50.093355 systemd[1]: sshd@55-91.99.216.181:22-139.178.68.195:35458.service: Deactivated successfully. Sep 5 23:58:50.098942 systemd[1]: session-52.scope: Deactivated successfully. Sep 5 23:58:50.100189 systemd-logind[1566]: Session 52 logged out. Waiting for processes to exit. Sep 5 23:58:50.101482 systemd-logind[1566]: Removed session 52. Sep 5 23:58:55.255963 systemd[1]: Started sshd@56-91.99.216.181:22-139.178.68.195:59736.service - OpenSSH per-connection server daemon (139.178.68.195:59736). Sep 5 23:58:56.248493 sshd[4876]: Accepted publickey for core from 139.178.68.195 port 59736 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:58:56.250300 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:56.257375 systemd-logind[1566]: New session 53 of user core. Sep 5 23:58:56.267863 systemd[1]: Started session-53.scope - Session 53 of User core. Sep 5 23:58:57.011689 sshd[4876]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:57.017313 systemd-logind[1566]: Session 53 logged out. Waiting for processes to exit. Sep 5 23:58:57.017633 systemd[1]: sshd@56-91.99.216.181:22-139.178.68.195:59736.service: Deactivated successfully. Sep 5 23:58:57.020294 systemd[1]: session-53.scope: Deactivated successfully. Sep 5 23:58:57.022881 systemd-logind[1566]: Removed session 53. Sep 5 23:59:02.181727 systemd[1]: Started sshd@57-91.99.216.181:22-139.178.68.195:50186.service - OpenSSH per-connection server daemon (139.178.68.195:50186). 
Sep 5 23:59:03.175945 sshd[4892]: Accepted publickey for core from 139.178.68.195 port 50186 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:03.178327 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:03.184177 systemd-logind[1566]: New session 54 of user core. Sep 5 23:59:03.189920 systemd[1]: Started session-54.scope - Session 54 of User core. Sep 5 23:59:03.937751 sshd[4892]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:03.941879 systemd[1]: sshd@57-91.99.216.181:22-139.178.68.195:50186.service: Deactivated successfully. Sep 5 23:59:03.947908 systemd[1]: session-54.scope: Deactivated successfully. Sep 5 23:59:03.949077 systemd-logind[1566]: Session 54 logged out. Waiting for processes to exit. Sep 5 23:59:03.950302 systemd-logind[1566]: Removed session 54. Sep 5 23:59:09.132723 systemd[1]: Started sshd@58-91.99.216.181:22-139.178.68.195:50196.service - OpenSSH per-connection server daemon (139.178.68.195:50196). Sep 5 23:59:10.192737 sshd[4908]: Accepted publickey for core from 139.178.68.195 port 50196 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:10.195834 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:10.206989 systemd-logind[1566]: New session 55 of user core. Sep 5 23:59:10.211791 systemd[1]: Started session-55.scope - Session 55 of User core. Sep 5 23:59:10.995013 sshd[4908]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:10.999258 systemd[1]: sshd@58-91.99.216.181:22-139.178.68.195:50196.service: Deactivated successfully. Sep 5 23:59:11.003988 systemd-logind[1566]: Session 55 logged out. Waiting for processes to exit. Sep 5 23:59:11.004691 systemd[1]: session-55.scope: Deactivated successfully. Sep 5 23:59:11.006132 systemd-logind[1566]: Removed session 55. Sep 5 23:59:16.157038 systemd[1]: Started sshd@59-91.99.216.181:22-139.178.68.195:43886.service - OpenSSH per-connection server daemon (139.178.68.195:43886). Sep 5 23:59:17.151758 sshd[4922]: Accepted publickey for core from 139.178.68.195 port 43886 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:17.154064 sshd[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:17.161396 systemd-logind[1566]: New session 56 of user core. Sep 5 23:59:17.169984 systemd[1]: Started session-56.scope - Session 56 of User core. Sep 5 23:59:17.913673 sshd[4922]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:17.918076 systemd[1]: sshd@59-91.99.216.181:22-139.178.68.195:43886.service: Deactivated successfully. Sep 5 23:59:17.923923 systemd[1]: session-56.scope: Deactivated successfully. Sep 5 23:59:17.926161 systemd-logind[1566]: Session 56 logged out. Waiting for processes to exit. Sep 5 23:59:17.928496 systemd-logind[1566]: Removed session 56. Sep 5 23:59:23.092704 systemd[1]: Started sshd@60-91.99.216.181:22-139.178.68.195:57030.service - OpenSSH per-connection server daemon (139.178.68.195:57030). 
Sep 5 23:59:23.821229 update_engine[1570]: I20250905 23:59:23.820516 1570 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 23:59:23.821229 update_engine[1570]: I20250905 23:59:23.820581 1570 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 23:59:23.821229 update_engine[1570]: I20250905 23:59:23.820859 1570 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821514 1570 omaha_request_params.cc:62] Current group set to lts Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821744 1570 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821766 1570 update_attempter.cc:643] Scheduling an action processor start. Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821791 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821841 1570 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 23:59:23.821930 update_engine[1570]: I20250905 23:59:23.821923 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:59:23.822197 update_engine[1570]: I20250905 23:59:23.821938 1570 omaha_request_action.cc:272] Request: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: Sep 5 23:59:23.822197 update_engine[1570]: I20250905 23:59:23.821947 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:59:23.823461 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 23:59:23.825048 update_engine[1570]: I20250905 23:59:23.825002 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:59:23.825490 update_engine[1570]: I20250905 23:59:23.825454 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:59:23.828446 update_engine[1570]: E20250905 23:59:23.828362 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:59:23.828539 update_engine[1570]: I20250905 23:59:23.828507 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 23:59:24.145742 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 57030 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:24.148365 sshd[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:24.153664 systemd-logind[1566]: New session 57 of user core. Sep 5 23:59:24.164915 systemd[1]: Started session-57.scope - Session 57 of User core. Sep 5 23:59:24.944835 sshd[4937]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:24.951114 systemd[1]: sshd@60-91.99.216.181:22-139.178.68.195:57030.service: Deactivated successfully. Sep 5 23:59:24.955806 systemd[1]: session-57.scope: Deactivated successfully. Sep 5 23:59:24.957116 systemd-logind[1566]: Session 57 logged out. Waiting for processes to exit. Sep 5 23:59:24.958355 systemd-logind[1566]: Removed session 57. 
Sep 5 23:59:30.109800 systemd[1]: Started sshd@61-91.99.216.181:22-139.178.68.195:57042.service - OpenSSH per-connection server daemon (139.178.68.195:57042). Sep 5 23:59:31.109361 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 57042 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:31.111791 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:31.117281 systemd-logind[1566]: New session 58 of user core. Sep 5 23:59:31.125351 systemd[1]: Started session-58.scope - Session 58 of User core. Sep 5 23:59:31.873796 sshd[4952]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:31.879666 systemd[1]: sshd@61-91.99.216.181:22-139.178.68.195:57042.service: Deactivated successfully. Sep 5 23:59:31.884374 systemd[1]: session-58.scope: Deactivated successfully. Sep 5 23:59:31.885775 systemd-logind[1566]: Session 58 logged out. Waiting for processes to exit. Sep 5 23:59:31.887006 systemd-logind[1566]: Removed session 58. Sep 5 23:59:33.816385 update_engine[1570]: I20250905 23:59:33.816268 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:59:33.817124 update_engine[1570]: I20250905 23:59:33.816599 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:59:33.817124 update_engine[1570]: I20250905 23:59:33.816857 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:59:33.817721 update_engine[1570]: E20250905 23:59:33.817651 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:59:33.817795 update_engine[1570]: I20250905 23:59:33.817744 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 23:59:37.041697 systemd[1]: Started sshd@62-91.99.216.181:22-139.178.68.195:44576.service - OpenSSH per-connection server daemon (139.178.68.195:44576). Sep 5 23:59:38.032524 sshd[4967]: Accepted publickey for core from 139.178.68.195 port 44576 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:38.035148 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:38.042544 systemd-logind[1566]: New session 59 of user core. Sep 5 23:59:38.048801 systemd[1]: Started session-59.scope - Session 59 of User core. Sep 5 23:59:38.812048 sshd[4967]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:38.817524 systemd[1]: sshd@62-91.99.216.181:22-139.178.68.195:44576.service: Deactivated successfully. Sep 5 23:59:38.822805 systemd[1]: session-59.scope: Deactivated successfully. Sep 5 23:59:38.823823 systemd-logind[1566]: Session 59 logged out. Waiting for processes to exit. Sep 5 23:59:38.824859 systemd-logind[1566]: Removed session 59. Sep 5 23:59:43.819638 update_engine[1570]: I20250905 23:59:43.819458 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:59:43.820552 update_engine[1570]: I20250905 23:59:43.819823 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:59:43.820552 update_engine[1570]: I20250905 23:59:43.820156 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 23:59:43.821101 update_engine[1570]: E20250905 23:59:43.820998 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:59:43.821160 update_engine[1570]: I20250905 23:59:43.821095 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 23:59:43.981487 systemd[1]: Started sshd@63-91.99.216.181:22-139.178.68.195:36368.service - OpenSSH per-connection server daemon (139.178.68.195:36368). Sep 5 23:59:44.976211 sshd[4981]: Accepted publickey for core from 139.178.68.195 port 36368 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:44.977286 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:44.987631 systemd-logind[1566]: New session 60 of user core. Sep 5 23:59:44.993812 systemd[1]: Started session-60.scope - Session 60 of User core. Sep 5 23:59:45.754350 sshd[4981]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:45.760883 systemd[1]: sshd@63-91.99.216.181:22-139.178.68.195:36368.service: Deactivated successfully. Sep 5 23:59:45.764206 systemd[1]: session-60.scope: Deactivated successfully. Sep 5 23:59:45.765334 systemd-logind[1566]: Session 60 logged out. Waiting for processes to exit. Sep 5 23:59:45.766254 systemd-logind[1566]: Removed session 60. Sep 5 23:59:50.923906 systemd[1]: Started sshd@64-91.99.216.181:22-139.178.68.195:50462.service - OpenSSH per-connection server daemon (139.178.68.195:50462). Sep 5 23:59:51.913079 sshd[4995]: Accepted publickey for core from 139.178.68.195 port 50462 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:51.915878 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:51.921820 systemd-logind[1566]: New session 61 of user core. Sep 5 23:59:51.925062 systemd[1]: Started session-61.scope - Session 61 of User core. Sep 5 23:59:52.665831 sshd[4995]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:52.669315 systemd[1]: sshd@64-91.99.216.181:22-139.178.68.195:50462.service: Deactivated successfully. Sep 5 23:59:52.677919 systemd-logind[1566]: Session 61 logged out. Waiting for processes to exit. Sep 5 23:59:52.677973 systemd[1]: session-61.scope: Deactivated successfully. Sep 5 23:59:52.679957 systemd-logind[1566]: Removed session 61. Sep 5 23:59:53.816512 update_engine[1570]: I20250905 23:59:53.815771 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:59:53.816512 update_engine[1570]: I20250905 23:59:53.816172 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:59:53.817244 update_engine[1570]: I20250905 23:59:53.816541 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:59:53.817683 update_engine[1570]: E20250905 23:59:53.817615 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:59:53.817783 update_engine[1570]: I20250905 23:59:53.817735 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:59:53.817783 update_engine[1570]: I20250905 23:59:53.817756 1570 omaha_request_action.cc:617] Omaha request response: Sep 5 23:59:53.817921 update_engine[1570]: E20250905 23:59:53.817873 1570 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 5 23:59:53.817921 update_engine[1570]: I20250905 23:59:53.817909 1570 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Sep 5 23:59:53.818013 update_engine[1570]: I20250905 23:59:53.817921 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:59:53.818013 update_engine[1570]: I20250905 23:59:53.817934 1570 update_attempter.cc:306] Processing Done. Sep 5 23:59:53.818013 update_engine[1570]: E20250905 23:59:53.817958 1570 update_attempter.cc:619] Update failed. Sep 5 23:59:53.818013 update_engine[1570]: I20250905 23:59:53.817970 1570 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 23:59:53.818013 update_engine[1570]: I20250905 23:59:53.817981 1570 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 23:59:53.818013 update_engine[1570]: I20250905 23:59:53.817991 1570 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 5 23:59:53.818205 update_engine[1570]: I20250905 23:59:53.818101 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:59:53.818205 update_engine[1570]: I20250905 23:59:53.818138 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:59:53.818205 update_engine[1570]: I20250905 23:59:53.818151 1570 omaha_request_action.cc:272] Request: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: Sep 5 23:59:53.818205 update_engine[1570]: I20250905 23:59:53.818164 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:59:53.818537 update_engine[1570]: I20250905 23:59:53.818439 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:59:53.818980 update_engine[1570]: I20250905 23:59:53.818743 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:59:53.819053 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 23:59:53.819788 update_engine[1570]: E20250905 23:59:53.819727 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:59:53.819843 update_engine[1570]: I20250905 23:59:53.819809 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:59:53.819843 update_engine[1570]: I20250905 23:59:53.819825 1570 omaha_request_action.cc:617] Omaha request response: Sep 5 23:59:53.819843 update_engine[1570]: I20250905 23:59:53.819836 1570 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:59:53.819928 update_engine[1570]: I20250905 23:59:53.819846 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:59:53.819928 update_engine[1570]: I20250905 23:59:53.819856 1570 update_attempter.cc:306] Processing Done. Sep 5 23:59:53.819928 update_engine[1570]: I20250905 23:59:53.819866 1570 update_attempter.cc:310] Error event sent. 
Sep 5 23:59:53.819928 update_engine[1570]: I20250905 23:59:53.819880 1570 update_check_scheduler.cc:74] Next update check in 48m48s Sep 5 23:59:53.820233 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 23:59:57.836833 systemd[1]: Started sshd@65-91.99.216.181:22-139.178.68.195:50466.service - OpenSSH per-connection server daemon (139.178.68.195:50466). Sep 5 23:59:58.832777 sshd[5009]: Accepted publickey for core from 139.178.68.195 port 50466 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:59:58.834330 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:58.839371 systemd-logind[1566]: New session 62 of user core. Sep 5 23:59:58.850380 systemd[1]: Started session-62.scope - Session 62 of User core. Sep 5 23:59:59.592718 sshd[5009]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:59.595887 systemd[1]: sshd@65-91.99.216.181:22-139.178.68.195:50466.service: Deactivated successfully. Sep 5 23:59:59.600090 systemd-logind[1566]: Session 62 logged out. Waiting for processes to exit. Sep 5 23:59:59.600370 systemd[1]: session-62.scope: Deactivated successfully. Sep 5 23:59:59.601923 systemd-logind[1566]: Removed session 62. Sep 6 00:00:04.783712 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 6 00:00:04.786910 systemd[1]: Started sshd@66-91.99.216.181:22-139.178.68.195:32870.service - OpenSSH per-connection server daemon (139.178.68.195:32870). Sep 6 00:00:04.808200 systemd[1]: logrotate.service: Deactivated successfully. Sep 6 00:00:05.855441 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 32870 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:05.857185 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:05.862367 systemd-logind[1566]: New session 63 of user core. Sep 6 00:00:05.866734 systemd[1]: Started session-63.scope - Session 63 of User core. Sep 6 00:00:06.655170 sshd[5026]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:06.661786 systemd-logind[1566]: Session 63 logged out. Waiting for processes to exit. Sep 6 00:00:06.662539 systemd[1]: sshd@66-91.99.216.181:22-139.178.68.195:32870.service: Deactivated successfully. Sep 6 00:00:06.665777 systemd[1]: session-63.scope: Deactivated successfully. Sep 6 00:00:06.667008 systemd-logind[1566]: Removed session 63. Sep 6 00:00:11.812879 systemd[1]: Started sshd@67-91.99.216.181:22-139.178.68.195:55676.service - OpenSSH per-connection server daemon (139.178.68.195:55676). Sep 6 00:00:12.821681 sshd[5042]: Accepted publickey for core from 139.178.68.195 port 55676 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:12.823305 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:12.828840 systemd-logind[1566]: New session 64 of user core. Sep 6 00:00:12.834729 systemd[1]: Started session-64.scope - Session 64 of User core. Sep 6 00:00:13.585699 sshd[5042]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:13.589975 systemd[1]: sshd@67-91.99.216.181:22-139.178.68.195:55676.service: Deactivated successfully. Sep 6 00:00:13.595618 systemd[1]: session-64.scope: Deactivated successfully. Sep 6 00:00:13.597311 systemd-logind[1566]: Session 64 logged out. Waiting for processes to exit. Sep 6 00:00:13.598378 systemd-logind[1566]: Removed session 64. 
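The update_engine failures above all trace back to the Omaha server URL being the literal string "disabled": curl cannot resolve that as a hostname, so every check ends in error 37 (kActionCodeOmahaErrorInHTTPResponse) and a reschedule ("Next update check in 48m48s"). A plausible way a Flatcar host ends up in this state is an override like the sketch below; the path and keys are the conventional Flatcar ones and are an assumption here, not read from this machine.

    # /etc/flatcar/update.conf -- hypothetical override that would produce
    # "Posting an Omaha request to disabled" and the resolve failures above
    GROUP=stable
    SERVER=disabled

If the periodic retry noise is unwanted, masking the updater units (systemctl mask update-engine.service locksmithd.service) is the more usual way to switch automatic updates off entirely; the locksmithd lines in the log appear to be simply its view of update_engine's status, returning to UPDATE_STATUS_IDLE once the error event is sent.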
Sep 6 00:00:18.753875 systemd[1]: Started sshd@68-91.99.216.181:22-139.178.68.195:55690.service - OpenSSH per-connection server daemon (139.178.68.195:55690). Sep 6 00:00:19.773843 sshd[5057]: Accepted publickey for core from 139.178.68.195 port 55690 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:19.777196 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:19.786989 systemd-logind[1566]: New session 65 of user core. Sep 6 00:00:19.795771 systemd[1]: Started session-65.scope - Session 65 of User core. Sep 6 00:00:20.533820 sshd[5057]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:20.540602 systemd-logind[1566]: Session 65 logged out. Waiting for processes to exit. Sep 6 00:00:20.541272 systemd[1]: sshd@68-91.99.216.181:22-139.178.68.195:55690.service: Deactivated successfully. Sep 6 00:00:20.546943 systemd[1]: session-65.scope: Deactivated successfully. Sep 6 00:00:20.548810 systemd-logind[1566]: Removed session 65. Sep 6 00:00:25.729886 systemd[1]: Started sshd@69-91.99.216.181:22-139.178.68.195:51800.service - OpenSSH per-connection server daemon (139.178.68.195:51800). Sep 6 00:00:26.857067 sshd[5071]: Accepted publickey for core from 139.178.68.195 port 51800 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:26.859753 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:26.866111 systemd-logind[1566]: New session 66 of user core. Sep 6 00:00:26.871777 systemd[1]: Started session-66.scope - Session 66 of User core. Sep 6 00:00:27.698621 sshd[5071]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:27.703301 systemd[1]: sshd@69-91.99.216.181:22-139.178.68.195:51800.service: Deactivated successfully. Sep 6 00:00:27.709829 systemd[1]: session-66.scope: Deactivated successfully. Sep 6 00:00:27.712335 systemd-logind[1566]: Session 66 logged out. Waiting for processes to exit. Sep 6 00:00:27.713539 systemd-logind[1566]: Removed session 66. Sep 6 00:00:32.881908 systemd[1]: Started sshd@70-91.99.216.181:22-139.178.68.195:34116.service - OpenSSH per-connection server daemon (139.178.68.195:34116). Sep 6 00:00:33.931861 sshd[5087]: Accepted publickey for core from 139.178.68.195 port 34116 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:33.935161 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:33.941135 systemd-logind[1566]: New session 67 of user core. Sep 6 00:00:33.945690 systemd[1]: Started session-67.scope - Session 67 of User core. Sep 6 00:00:34.733847 sshd[5087]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:34.738885 systemd[1]: sshd@70-91.99.216.181:22-139.178.68.195:34116.service: Deactivated successfully. Sep 6 00:00:34.744628 systemd[1]: session-67.scope: Deactivated successfully. Sep 6 00:00:34.745791 systemd-logind[1566]: Session 67 logged out. Waiting for processes to exit. Sep 6 00:00:34.747023 systemd-logind[1566]: Removed session 67. Sep 6 00:00:39.911855 systemd[1]: Started sshd@71-91.99.216.181:22-139.178.68.195:34124.service - OpenSSH per-connection server daemon (139.178.68.195:34124). 
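The brief logrotate.service activation just after midnight (Sep 6 00:00:04, above) is the kind of run normally driven by a daily logrotate.timer; here it started and deactivated cleanly within about 25 ms. For reference, a minimal drop-in of the sort such a run processes is sketched below; the file name, pattern, and retention values are illustrative, not taken from this host.

    # /etc/logrotate.d/custom -- hypothetical example
    /var/log/custom/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
    }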
Sep 6 00:00:40.964006 sshd[5103]: Accepted publickey for core from 139.178.68.195 port 34124 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:40.966670 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:40.972018 systemd-logind[1566]: New session 68 of user core. Sep 6 00:00:40.975771 systemd[1]: Started session-68.scope - Session 68 of User core. Sep 6 00:00:41.764854 sshd[5103]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:41.771135 systemd[1]: sshd@71-91.99.216.181:22-139.178.68.195:34124.service: Deactivated successfully. Sep 6 00:00:41.771381 systemd-logind[1566]: Session 68 logged out. Waiting for processes to exit. Sep 6 00:00:41.775230 systemd[1]: session-68.scope: Deactivated successfully. Sep 6 00:00:41.776914 systemd-logind[1566]: Removed session 68. Sep 6 00:00:46.945741 systemd[1]: Started sshd@72-91.99.216.181:22-139.178.68.195:38318.service - OpenSSH per-connection server daemon (139.178.68.195:38318). Sep 6 00:00:48.010120 sshd[5117]: Accepted publickey for core from 139.178.68.195 port 38318 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:48.014105 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:48.023378 systemd-logind[1566]: New session 69 of user core. Sep 6 00:00:48.028232 systemd[1]: Started session-69.scope - Session 69 of User core. Sep 6 00:00:48.836819 sshd[5117]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:48.842057 systemd[1]: sshd@72-91.99.216.181:22-139.178.68.195:38318.service: Deactivated successfully. Sep 6 00:00:48.846829 systemd[1]: session-69.scope: Deactivated successfully. Sep 6 00:00:48.847774 systemd-logind[1566]: Session 69 logged out. Waiting for processes to exit. Sep 6 00:00:48.848976 systemd-logind[1566]: Removed session 69. Sep 6 00:00:53.995709 systemd[1]: Started sshd@73-91.99.216.181:22-139.178.68.195:39966.service - OpenSSH per-connection server daemon (139.178.68.195:39966). Sep 6 00:00:54.994650 sshd[5131]: Accepted publickey for core from 139.178.68.195 port 39966 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:00:54.994375 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:00:55.003996 systemd-logind[1566]: New session 70 of user core. Sep 6 00:00:55.010539 systemd[1]: Started session-70.scope - Session 70 of User core. Sep 6 00:00:55.757717 sshd[5131]: pam_unix(sshd:session): session closed for user core Sep 6 00:00:55.762135 systemd[1]: sshd@73-91.99.216.181:22-139.178.68.195:39966.service: Deactivated successfully. Sep 6 00:00:55.765910 systemd-logind[1566]: Session 70 logged out. Waiting for processes to exit. Sep 6 00:00:55.766165 systemd[1]: session-70.scope: Deactivated successfully. Sep 6 00:00:55.768054 systemd-logind[1566]: Removed session 70. Sep 6 00:01:00.930733 systemd[1]: Started sshd@74-91.99.216.181:22-139.178.68.195:44026.service - OpenSSH per-connection server daemon (139.178.68.195:44026). Sep 6 00:01:01.929809 sshd[5145]: Accepted publickey for core from 139.178.68.195 port 44026 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:01.933223 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:01.944874 systemd-logind[1566]: New session 71 of user core. Sep 6 00:01:01.955839 systemd[1]: Started session-71.scope - Session 71 of User core. 
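Each accepted login above identifies the client key only by its SHA256 fingerprint. To map that fingerprint back to a particular entry in the core user's authorized_keys, fingerprint the candidates the same way sshd does; the path below assumes the default location for the core user.

    ssh-keygen -lf /home/core/.ssh/authorized_keys

This prints one "<bits> SHA256:<fingerprint> <comment> (<type>)" line per key, which can be compared directly against the fingerprints in the log.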
Sep 6 00:01:02.694605 sshd[5145]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:02.700293 systemd[1]: sshd@74-91.99.216.181:22-139.178.68.195:44026.service: Deactivated successfully. Sep 6 00:01:02.704225 systemd-logind[1566]: Session 71 logged out. Waiting for processes to exit. Sep 6 00:01:02.704907 systemd[1]: session-71.scope: Deactivated successfully. Sep 6 00:01:02.707220 systemd-logind[1566]: Removed session 71. Sep 6 00:01:07.886904 systemd[1]: Started sshd@75-91.99.216.181:22-139.178.68.195:44038.service - OpenSSH per-connection server daemon (139.178.68.195:44038). Sep 6 00:01:08.940438 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 44038 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:08.940006 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:08.949793 systemd-logind[1566]: New session 72 of user core. Sep 6 00:01:08.958072 systemd[1]: Started session-72.scope - Session 72 of User core. Sep 6 00:01:09.746032 sshd[5161]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:09.751634 systemd-logind[1566]: Session 72 logged out. Waiting for processes to exit. Sep 6 00:01:09.752155 systemd[1]: sshd@75-91.99.216.181:22-139.178.68.195:44038.service: Deactivated successfully. Sep 6 00:01:09.756421 systemd[1]: session-72.scope: Deactivated successfully. Sep 6 00:01:09.757964 systemd-logind[1566]: Removed session 72. Sep 6 00:01:13.271051 systemd[1]: Started sshd@76-91.99.216.181:22-188.166.242.21:53280.service - OpenSSH per-connection server daemon (188.166.242.21:53280). Sep 6 00:01:14.146562 sshd[5175]: Connection closed by authenticating user root 188.166.242.21 port 53280 [preauth] Sep 6 00:01:14.151914 systemd[1]: sshd@76-91.99.216.181:22-188.166.242.21:53280.service: Deactivated successfully. Sep 6 00:01:14.904776 systemd[1]: Started sshd@77-91.99.216.181:22-139.178.68.195:56988.service - OpenSSH per-connection server daemon (139.178.68.195:56988). Sep 6 00:01:15.897893 sshd[5180]: Accepted publickey for core from 139.178.68.195 port 56988 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:15.900258 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:15.907362 systemd-logind[1566]: New session 73 of user core. Sep 6 00:01:15.916022 systemd[1]: Started session-73.scope - Session 73 of User core. Sep 6 00:01:16.661804 sshd[5180]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:16.666678 systemd[1]: sshd@77-91.99.216.181:22-139.178.68.195:56988.service: Deactivated successfully. Sep 6 00:01:16.671970 systemd[1]: session-73.scope: Deactivated successfully. Sep 6 00:01:16.673156 systemd-logind[1566]: Session 73 logged out. Waiting for processes to exit. Sep 6 00:01:16.674287 systemd-logind[1566]: Removed session 73. Sep 6 00:01:21.833716 systemd[1]: Started sshd@78-91.99.216.181:22-139.178.68.195:39188.service - OpenSSH per-connection server daemon (139.178.68.195:39188). Sep 6 00:01:22.868131 sshd[5194]: Accepted publickey for core from 139.178.68.195 port 39188 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:22.870348 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:22.875981 systemd-logind[1566]: New session 74 of user core. Sep 6 00:01:22.879705 systemd[1]: Started session-74.scope - Session 74 of User core. 
Sep 6 00:01:23.654360 sshd[5194]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:23.661780 systemd[1]: sshd@78-91.99.216.181:22-139.178.68.195:39188.service: Deactivated successfully. Sep 6 00:01:23.669172 systemd[1]: session-74.scope: Deactivated successfully. Sep 6 00:01:23.671370 systemd-logind[1566]: Session 74 logged out. Waiting for processes to exit. Sep 6 00:01:23.679568 systemd-logind[1566]: Removed session 74. Sep 6 00:01:28.826920 systemd[1]: Started sshd@79-91.99.216.181:22-139.178.68.195:39200.service - OpenSSH per-connection server daemon (139.178.68.195:39200). Sep 6 00:01:29.832099 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 39200 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:29.834694 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:29.840297 systemd-logind[1566]: New session 75 of user core. Sep 6 00:01:29.849018 systemd[1]: Started session-75.scope - Session 75 of User core. Sep 6 00:01:30.597827 sshd[5209]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:30.603290 systemd[1]: sshd@79-91.99.216.181:22-139.178.68.195:39200.service: Deactivated successfully. Sep 6 00:01:30.609748 systemd[1]: session-75.scope: Deactivated successfully. Sep 6 00:01:30.612209 systemd-logind[1566]: Session 75 logged out. Waiting for processes to exit. Sep 6 00:01:30.613371 systemd-logind[1566]: Removed session 75. Sep 6 00:01:35.764713 systemd[1]: Started sshd@80-91.99.216.181:22-139.178.68.195:48950.service - OpenSSH per-connection server daemon (139.178.68.195:48950). Sep 6 00:01:36.760707 sshd[5225]: Accepted publickey for core from 139.178.68.195 port 48950 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:36.763466 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:36.769371 systemd-logind[1566]: New session 76 of user core. Sep 6 00:01:36.773829 systemd[1]: Started session-76.scope - Session 76 of User core. Sep 6 00:01:37.538828 sshd[5225]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:37.545470 systemd[1]: sshd@80-91.99.216.181:22-139.178.68.195:48950.service: Deactivated successfully. Sep 6 00:01:37.551553 systemd[1]: session-76.scope: Deactivated successfully. Sep 6 00:01:37.553316 systemd-logind[1566]: Session 76 logged out. Waiting for processes to exit. Sep 6 00:01:37.555008 systemd-logind[1566]: Removed session 76. Sep 6 00:01:42.708762 systemd[1]: Started sshd@81-91.99.216.181:22-139.178.68.195:47398.service - OpenSSH per-connection server daemon (139.178.68.195:47398). Sep 6 00:01:43.701501 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 47398 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:43.703634 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:43.708706 systemd-logind[1566]: New session 77 of user core. Sep 6 00:01:43.712720 systemd[1]: Started session-77.scope - Session 77 of User core. Sep 6 00:01:44.461697 sshd[5240]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:44.466295 systemd[1]: sshd@81-91.99.216.181:22-139.178.68.195:47398.service: Deactivated successfully. Sep 6 00:01:44.469852 systemd-logind[1566]: Session 77 logged out. Waiting for processes to exit. Sep 6 00:01:44.470104 systemd[1]: session-77.scope: Deactivated successfully. Sep 6 00:01:44.473067 systemd-logind[1566]: Removed session 77. 
Sep 6 00:01:49.650739 systemd[1]: Started sshd@82-91.99.216.181:22-139.178.68.195:47402.service - OpenSSH per-connection server daemon (139.178.68.195:47402). Sep 6 00:01:50.707496 sshd[5253]: Accepted publickey for core from 139.178.68.195 port 47402 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:50.712157 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:50.720032 systemd-logind[1566]: New session 78 of user core. Sep 6 00:01:50.729279 systemd[1]: Started session-78.scope - Session 78 of User core. Sep 6 00:01:51.529752 sshd[5253]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:51.536453 systemd[1]: sshd@82-91.99.216.181:22-139.178.68.195:47402.service: Deactivated successfully. Sep 6 00:01:51.540198 systemd-logind[1566]: Session 78 logged out. Waiting for processes to exit. Sep 6 00:01:51.540527 systemd[1]: session-78.scope: Deactivated successfully. Sep 6 00:01:51.543372 systemd-logind[1566]: Removed session 78. Sep 6 00:01:56.712931 systemd[1]: Started sshd@83-91.99.216.181:22-139.178.68.195:42226.service - OpenSSH per-connection server daemon (139.178.68.195:42226). Sep 6 00:01:57.766632 sshd[5267]: Accepted publickey for core from 139.178.68.195 port 42226 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:01:57.769320 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:01:57.777631 systemd-logind[1566]: New session 79 of user core. Sep 6 00:01:57.783841 systemd[1]: Started session-79.scope - Session 79 of User core. Sep 6 00:01:58.563998 sshd[5267]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:58.571279 systemd[1]: sshd@83-91.99.216.181:22-139.178.68.195:42226.service: Deactivated successfully. Sep 6 00:01:58.576148 systemd[1]: session-79.scope: Deactivated successfully. Sep 6 00:01:58.576314 systemd-logind[1566]: Session 79 logged out. Waiting for processes to exit. Sep 6 00:01:58.578738 systemd-logind[1566]: Removed session 79. Sep 6 00:02:03.743901 systemd[1]: Started sshd@84-91.99.216.181:22-139.178.68.195:47222.service - OpenSSH per-connection server daemon (139.178.68.195:47222). Sep 6 00:02:04.800290 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 47222 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:04.803190 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:04.809353 systemd-logind[1566]: New session 80 of user core. Sep 6 00:02:04.814203 systemd[1]: Started session-80.scope - Session 80 of User core. Sep 6 00:02:05.597025 sshd[5281]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:05.603057 systemd[1]: sshd@84-91.99.216.181:22-139.178.68.195:47222.service: Deactivated successfully. Sep 6 00:02:05.608796 systemd[1]: session-80.scope: Deactivated successfully. Sep 6 00:02:05.609668 systemd-logind[1566]: Session 80 logged out. Waiting for processes to exit. Sep 6 00:02:05.610655 systemd-logind[1566]: Removed session 80. Sep 6 00:02:05.778145 systemd[1]: Started sshd@85-91.99.216.181:22-139.178.68.195:47230.service - OpenSSH per-connection server daemon (139.178.68.195:47230). 
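Every sshd@NN-<local address>:22-<peer address>:<port>.service instance above comes from systemd socket activation with Accept=yes: the listening socket accepts each TCP connection and spawns one templated sshd service per client, encoding a connection counter plus the local and remote endpoints in the instance name, which is why each disconnect is followed by its own "Deactivated successfully" line. A minimal sketch of such a unit pair follows; Flatcar's shipped units may differ in detail.

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (sketch)
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    # -i: this sshd instance serves the single connection handed to it on stdin
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket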
Sep 6 00:02:06.826712 sshd[5297]: Accepted publickey for core from 139.178.68.195 port 47230 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:06.830222 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:06.837370 systemd-logind[1566]: New session 81 of user core. Sep 6 00:02:06.844961 systemd[1]: Started session-81.scope - Session 81 of User core. Sep 6 00:02:07.697784 sshd[5297]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:07.703915 systemd[1]: sshd@85-91.99.216.181:22-139.178.68.195:47230.service: Deactivated successfully. Sep 6 00:02:07.707016 systemd[1]: session-81.scope: Deactivated successfully. Sep 6 00:02:07.707657 systemd-logind[1566]: Session 81 logged out. Waiting for processes to exit. Sep 6 00:02:07.709157 systemd-logind[1566]: Removed session 81. Sep 6 00:02:07.856212 systemd[1]: Started sshd@86-91.99.216.181:22-139.178.68.195:47236.service - OpenSSH per-connection server daemon (139.178.68.195:47236). Sep 6 00:02:08.851289 sshd[5309]: Accepted publickey for core from 139.178.68.195 port 47236 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:08.853832 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:08.860162 systemd-logind[1566]: New session 82 of user core. Sep 6 00:02:08.867020 systemd[1]: Started session-82.scope - Session 82 of User core. Sep 6 00:02:10.876831 sshd[5309]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:10.882392 systemd-logind[1566]: Session 82 logged out. Waiting for processes to exit. Sep 6 00:02:10.883015 systemd[1]: sshd@86-91.99.216.181:22-139.178.68.195:47236.service: Deactivated successfully. Sep 6 00:02:10.887538 systemd[1]: session-82.scope: Deactivated successfully. Sep 6 00:02:10.888945 systemd-logind[1566]: Removed session 82. Sep 6 00:02:11.044734 systemd[1]: Started sshd@87-91.99.216.181:22-139.178.68.195:52770.service - OpenSSH per-connection server daemon (139.178.68.195:52770). Sep 6 00:02:12.040162 sshd[5328]: Accepted publickey for core from 139.178.68.195 port 52770 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:12.042430 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:12.048545 systemd-logind[1566]: New session 83 of user core. Sep 6 00:02:12.055577 systemd[1]: Started session-83.scope - Session 83 of User core. Sep 6 00:02:12.928791 sshd[5328]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:12.934176 systemd-logind[1566]: Session 83 logged out. Waiting for processes to exit. Sep 6 00:02:12.937356 systemd[1]: sshd@87-91.99.216.181:22-139.178.68.195:52770.service: Deactivated successfully. Sep 6 00:02:12.939988 systemd[1]: session-83.scope: Deactivated successfully. Sep 6 00:02:12.942132 systemd-logind[1566]: Removed session 83. Sep 6 00:02:13.116732 systemd[1]: Started sshd@88-91.99.216.181:22-139.178.68.195:52786.service - OpenSSH per-connection server daemon (139.178.68.195:52786). Sep 6 00:02:14.172960 sshd[5339]: Accepted publickey for core from 139.178.68.195 port 52786 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:14.176825 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:14.184981 systemd-logind[1566]: New session 84 of user core. Sep 6 00:02:14.190337 systemd[1]: Started session-84.scope - Session 84 of User core. 
Sep 6 00:02:15.005813 sshd[5339]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:15.011826 systemd[1]: sshd@88-91.99.216.181:22-139.178.68.195:52786.service: Deactivated successfully. Sep 6 00:02:15.016758 systemd[1]: session-84.scope: Deactivated successfully. Sep 6 00:02:15.016910 systemd-logind[1566]: Session 84 logged out. Waiting for processes to exit. Sep 6 00:02:15.018498 systemd-logind[1566]: Removed session 84. Sep 6 00:02:20.168887 systemd[1]: Started sshd@89-91.99.216.181:22-139.178.68.195:38644.service - OpenSSH per-connection server daemon (139.178.68.195:38644). Sep 6 00:02:21.160741 sshd[5353]: Accepted publickey for core from 139.178.68.195 port 38644 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:21.162556 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:21.168533 systemd-logind[1566]: New session 85 of user core. Sep 6 00:02:21.171955 systemd[1]: Started session-85.scope - Session 85 of User core. Sep 6 00:02:21.921737 sshd[5353]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:21.927724 systemd[1]: sshd@89-91.99.216.181:22-139.178.68.195:38644.service: Deactivated successfully. Sep 6 00:02:21.933264 systemd[1]: session-85.scope: Deactivated successfully. Sep 6 00:02:21.934762 systemd-logind[1566]: Session 85 logged out. Waiting for processes to exit. Sep 6 00:02:21.936942 systemd-logind[1566]: Removed session 85. Sep 6 00:02:27.093545 systemd[1]: Started sshd@90-91.99.216.181:22-139.178.68.195:38648.service - OpenSSH per-connection server daemon (139.178.68.195:38648). Sep 6 00:02:28.087736 sshd[5369]: Accepted publickey for core from 139.178.68.195 port 38648 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:28.090056 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:28.099852 systemd-logind[1566]: New session 86 of user core. Sep 6 00:02:28.102729 systemd[1]: Started session-86.scope - Session 86 of User core. Sep 6 00:02:28.850438 sshd[5369]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:28.856759 systemd[1]: sshd@90-91.99.216.181:22-139.178.68.195:38648.service: Deactivated successfully. Sep 6 00:02:28.860086 systemd[1]: session-86.scope: Deactivated successfully. Sep 6 00:02:28.861015 systemd-logind[1566]: Session 86 logged out. Waiting for processes to exit. Sep 6 00:02:28.861942 systemd-logind[1566]: Removed session 86. Sep 6 00:02:34.032799 systemd[1]: Started sshd@91-91.99.216.181:22-139.178.68.195:52502.service - OpenSSH per-connection server daemon (139.178.68.195:52502). Sep 6 00:02:35.091331 sshd[5384]: Accepted publickey for core from 139.178.68.195 port 52502 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:35.095672 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:35.106078 systemd-logind[1566]: New session 87 of user core. Sep 6 00:02:35.113434 systemd[1]: Started session-87.scope - Session 87 of User core. Sep 6 00:02:35.918570 sshd[5384]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:35.924396 systemd-logind[1566]: Session 87 logged out. Waiting for processes to exit. Sep 6 00:02:35.924836 systemd[1]: sshd@91-91.99.216.181:22-139.178.68.195:52502.service: Deactivated successfully. Sep 6 00:02:35.929607 systemd[1]: session-87.scope: Deactivated successfully. Sep 6 00:02:35.931293 systemd-logind[1566]: Removed session 87. 
Sep 6 00:02:41.092761 systemd[1]: Started sshd@92-91.99.216.181:22-139.178.68.195:38844.service - OpenSSH per-connection server daemon (139.178.68.195:38844). Sep 6 00:02:42.088375 sshd[5400]: Accepted publickey for core from 139.178.68.195 port 38844 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:42.090219 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:42.095211 systemd-logind[1566]: New session 88 of user core. Sep 6 00:02:42.100897 systemd[1]: Started session-88.scope - Session 88 of User core. Sep 6 00:02:42.851784 sshd[5400]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:42.858347 systemd[1]: sshd@92-91.99.216.181:22-139.178.68.195:38844.service: Deactivated successfully. Sep 6 00:02:42.862802 systemd[1]: session-88.scope: Deactivated successfully. Sep 6 00:02:42.864877 systemd-logind[1566]: Session 88 logged out. Waiting for processes to exit. Sep 6 00:02:42.866206 systemd-logind[1566]: Removed session 88. Sep 6 00:02:48.024946 systemd[1]: Started sshd@93-91.99.216.181:22-139.178.68.195:38856.service - OpenSSH per-connection server daemon (139.178.68.195:38856). Sep 6 00:02:49.018254 sshd[5413]: Accepted publickey for core from 139.178.68.195 port 38856 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:49.021453 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:49.028386 systemd-logind[1566]: New session 89 of user core. Sep 6 00:02:49.035027 systemd[1]: Started session-89.scope - Session 89 of User core. Sep 6 00:02:49.776713 sshd[5413]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:49.782884 systemd[1]: sshd@93-91.99.216.181:22-139.178.68.195:38856.service: Deactivated successfully. Sep 6 00:02:49.784363 systemd-logind[1566]: Session 89 logged out. Waiting for processes to exit. Sep 6 00:02:49.786383 systemd[1]: session-89.scope: Deactivated successfully. Sep 6 00:02:49.787645 systemd-logind[1566]: Removed session 89. Sep 6 00:02:54.943991 systemd[1]: Started sshd@94-91.99.216.181:22-139.178.68.195:37874.service - OpenSSH per-connection server daemon (139.178.68.195:37874). Sep 6 00:02:55.944604 sshd[5427]: Accepted publickey for core from 139.178.68.195 port 37874 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:02:55.945650 sshd[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:02:55.953620 systemd-logind[1566]: New session 90 of user core. Sep 6 00:02:55.958732 systemd[1]: Started session-90.scope - Session 90 of User core. Sep 6 00:02:56.704081 sshd[5427]: pam_unix(sshd:session): session closed for user core Sep 6 00:02:56.709185 systemd-logind[1566]: Session 90 logged out. Waiting for processes to exit. Sep 6 00:02:56.709761 systemd[1]: sshd@94-91.99.216.181:22-139.178.68.195:37874.service: Deactivated successfully. Sep 6 00:02:56.715720 systemd[1]: session-90.scope: Deactivated successfully. Sep 6 00:02:56.718096 systemd-logind[1566]: Removed session 90. Sep 6 00:03:01.878743 systemd[1]: Started sshd@95-91.99.216.181:22-139.178.68.195:38106.service - OpenSSH per-connection server daemon (139.178.68.195:38106). 
Sep 6 00:03:02.870730 sshd[5441]: Accepted publickey for core from 139.178.68.195 port 38106 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:02.875758 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:02.880560 systemd-logind[1566]: New session 91 of user core. Sep 6 00:03:02.888181 systemd[1]: Started session-91.scope - Session 91 of User core. Sep 6 00:03:03.635746 sshd[5441]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:03.641355 systemd-logind[1566]: Session 91 logged out. Waiting for processes to exit. Sep 6 00:03:03.642715 systemd[1]: sshd@95-91.99.216.181:22-139.178.68.195:38106.service: Deactivated successfully. Sep 6 00:03:03.648143 systemd[1]: session-91.scope: Deactivated successfully. Sep 6 00:03:03.649435 systemd-logind[1566]: Removed session 91. Sep 6 00:03:08.401715 systemd[1]: Started sshd@96-91.99.216.181:22-103.186.221.74:44804.service - OpenSSH per-connection server daemon (103.186.221.74:44804). Sep 6 00:03:08.820080 systemd[1]: Started sshd@97-91.99.216.181:22-139.178.68.195:38114.service - OpenSSH per-connection server daemon (139.178.68.195:38114). Sep 6 00:03:09.871213 sshd[5458]: Accepted publickey for core from 139.178.68.195 port 38114 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:09.873673 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:09.879885 systemd-logind[1566]: New session 92 of user core. Sep 6 00:03:09.886938 systemd[1]: Started session-92.scope - Session 92 of User core. Sep 6 00:03:10.670817 sshd[5458]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:10.677075 systemd[1]: sshd@97-91.99.216.181:22-139.178.68.195:38114.service: Deactivated successfully. Sep 6 00:03:10.679712 systemd[1]: session-92.scope: Deactivated successfully. Sep 6 00:03:10.681730 systemd-logind[1566]: Session 92 logged out. Waiting for processes to exit. Sep 6 00:03:10.683059 systemd-logind[1566]: Removed session 92. Sep 6 00:03:12.709391 sshd[5457]: Invalid user guest from 103.186.221.74 port 44804 Sep 6 00:03:13.617955 sshd[5473]: pam_faillock(sshd:auth): User unknown Sep 6 00:03:13.623514 sshd[5457]: Postponed keyboard-interactive for invalid user guest from 103.186.221.74 port 44804 ssh2 [preauth] Sep 6 00:03:14.964298 sshd[5473]: pam_unix(sshd:auth): check pass; user unknown Sep 6 00:03:14.964349 sshd[5473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.186.221.74 Sep 6 00:03:14.966244 sshd[5473]: pam_faillock(sshd:auth): User unknown Sep 6 00:03:15.846943 systemd[1]: Started sshd@98-91.99.216.181:22-139.178.68.195:45400.service - OpenSSH per-connection server daemon (139.178.68.195:45400). Sep 6 00:03:16.849799 sshd[5474]: Accepted publickey for core from 139.178.68.195 port 45400 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:16.852385 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:16.857501 systemd-logind[1566]: New session 93 of user core. Sep 6 00:03:16.860894 systemd[1]: Started session-93.scope - Session 93 of User core. 
Sep 6 00:03:16.975442 sshd[5457]: PAM: Permission denied for illegal user guest from 103.186.221.74 Sep 6 00:03:16.976481 sshd[5457]: Failed keyboard-interactive/pam for invalid user guest from 103.186.221.74 port 44804 ssh2 Sep 6 00:03:17.609867 sshd[5474]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:17.614530 systemd[1]: sshd@98-91.99.216.181:22-139.178.68.195:45400.service: Deactivated successfully. Sep 6 00:03:17.618807 systemd[1]: session-93.scope: Deactivated successfully. Sep 6 00:03:17.620347 systemd-logind[1566]: Session 93 logged out. Waiting for processes to exit. Sep 6 00:03:17.621352 systemd-logind[1566]: Removed session 93. Sep 6 00:03:18.171964 sshd[5457]: Connection closed by invalid user guest 103.186.221.74 port 44804 [preauth] Sep 6 00:03:18.177961 systemd[1]: sshd@96-91.99.216.181:22-103.186.221.74:44804.service: Deactivated successfully. Sep 6 00:03:22.810724 systemd[1]: Started sshd@99-91.99.216.181:22-139.178.68.195:56884.service - OpenSSH per-connection server daemon (139.178.68.195:56884). Sep 6 00:03:23.923339 sshd[5491]: Accepted publickey for core from 139.178.68.195 port 56884 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:23.925554 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:23.931764 systemd-logind[1566]: New session 94 of user core. Sep 6 00:03:23.938840 systemd[1]: Started session-94.scope - Session 94 of User core. Sep 6 00:03:24.755805 sshd[5491]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:24.761931 systemd[1]: sshd@99-91.99.216.181:22-139.178.68.195:56884.service: Deactivated successfully. Sep 6 00:03:24.766050 systemd[1]: session-94.scope: Deactivated successfully. Sep 6 00:03:24.766846 systemd-logind[1566]: Session 94 logged out. Waiting for processes to exit. Sep 6 00:03:24.767863 systemd-logind[1566]: Removed session 94. Sep 6 00:03:29.913744 systemd[1]: Started sshd@100-91.99.216.181:22-139.178.68.195:56900.service - OpenSSH per-connection server daemon (139.178.68.195:56900). Sep 6 00:03:30.903841 sshd[5507]: Accepted publickey for core from 139.178.68.195 port 56900 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:30.906596 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:30.911591 systemd-logind[1566]: New session 95 of user core. Sep 6 00:03:30.917863 systemd[1]: Started session-95.scope - Session 95 of User core. Sep 6 00:03:31.662625 sshd[5507]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:31.668693 systemd[1]: sshd@100-91.99.216.181:22-139.178.68.195:56900.service: Deactivated successfully. Sep 6 00:03:31.673814 systemd-logind[1566]: Session 95 logged out. Waiting for processes to exit. Sep 6 00:03:31.674358 systemd[1]: session-95.scope: Deactivated successfully. Sep 6 00:03:31.676656 systemd-logind[1566]: Removed session 95. Sep 6 00:03:36.839693 systemd[1]: Started sshd@101-91.99.216.181:22-139.178.68.195:59610.service - OpenSSH per-connection server daemon (139.178.68.195:59610). Sep 6 00:03:37.842765 sshd[5522]: Accepted publickey for core from 139.178.68.195 port 59610 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:37.845254 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:37.849865 systemd-logind[1566]: New session 96 of user core. Sep 6 00:03:37.855816 systemd[1]: Started session-96.scope - Session 96 of User core. 
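The probes above (the invalid user "guest" from 103.186.221.74, and the earlier root attempt from 188.166.242.21) are stopped at preauth, but only after consuming keyboard-interactive attempts. A hedged sshd_config fragment that rejects this whole class of probe outright, while leaving the publickey logins for core untouched, is shown below; these are standard OpenSSH options, given as an illustration rather than this host's actual configuration.

    PermitRootLogin no
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    AuthenticationMethods publickey
    MaxAuthTries 3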
Sep 6 00:03:38.614511 sshd[5522]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:38.620238 systemd[1]: sshd@101-91.99.216.181:22-139.178.68.195:59610.service: Deactivated successfully. Sep 6 00:03:38.626698 systemd[1]: session-96.scope: Deactivated successfully. Sep 6 00:03:38.627652 systemd-logind[1566]: Session 96 logged out. Waiting for processes to exit. Sep 6 00:03:38.628625 systemd-logind[1566]: Removed session 96. Sep 6 00:03:43.812483 systemd[1]: Started sshd@102-91.99.216.181:22-139.178.68.195:44476.service - OpenSSH per-connection server daemon (139.178.68.195:44476). Sep 6 00:03:44.868057 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 44476 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:44.871222 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:44.878135 systemd-logind[1566]: New session 97 of user core. Sep 6 00:03:44.887121 systemd[1]: Started session-97.scope - Session 97 of User core. Sep 6 00:03:45.667719 sshd[5536]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:45.673109 systemd[1]: sshd@102-91.99.216.181:22-139.178.68.195:44476.service: Deactivated successfully. Sep 6 00:03:45.677379 systemd-logind[1566]: Session 97 logged out. Waiting for processes to exit. Sep 6 00:03:45.677380 systemd[1]: session-97.scope: Deactivated successfully. Sep 6 00:03:45.679592 systemd-logind[1566]: Removed session 97. Sep 6 00:03:50.836560 systemd[1]: Started sshd@103-91.99.216.181:22-139.178.68.195:45092.service - OpenSSH per-connection server daemon (139.178.68.195:45092). Sep 6 00:03:51.830103 sshd[5550]: Accepted publickey for core from 139.178.68.195 port 45092 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:51.832229 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:51.840515 systemd-logind[1566]: New session 98 of user core. Sep 6 00:03:51.848951 systemd[1]: Started session-98.scope - Session 98 of User core. Sep 6 00:03:52.614769 sshd[5550]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:52.621625 systemd-logind[1566]: Session 98 logged out. Waiting for processes to exit. Sep 6 00:03:52.622309 systemd[1]: sshd@103-91.99.216.181:22-139.178.68.195:45092.service: Deactivated successfully. Sep 6 00:03:52.627115 systemd[1]: session-98.scope: Deactivated successfully. Sep 6 00:03:52.631228 systemd-logind[1566]: Removed session 98. Sep 6 00:03:53.169869 systemd[1]: Started sshd@104-91.99.216.181:22-188.166.242.21:58654.service - OpenSSH per-connection server daemon (188.166.242.21:58654). Sep 6 00:03:54.112924 sshd[5564]: Connection closed by authenticating user root 188.166.242.21 port 58654 [preauth] Sep 6 00:03:54.116145 systemd[1]: sshd@104-91.99.216.181:22-188.166.242.21:58654.service: Deactivated successfully. Sep 6 00:03:57.790866 systemd[1]: Started sshd@105-91.99.216.181:22-139.178.68.195:45094.service - OpenSSH per-connection server daemon (139.178.68.195:45094). Sep 6 00:03:58.843580 sshd[5569]: Accepted publickey for core from 139.178.68.195 port 45094 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:03:58.846868 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:03:58.853444 systemd-logind[1566]: New session 99 of user core. Sep 6 00:03:58.866996 systemd[1]: Started session-99.scope - Session 99 of User core. 
Sep 6 00:03:59.649177 sshd[5569]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:59.658178 systemd[1]: sshd@105-91.99.216.181:22-139.178.68.195:45094.service: Deactivated successfully. Sep 6 00:03:59.668094 systemd[1]: session-99.scope: Deactivated successfully. Sep 6 00:03:59.670521 systemd-logind[1566]: Session 99 logged out. Waiting for processes to exit. Sep 6 00:03:59.673224 systemd-logind[1566]: Removed session 99. Sep 6 00:04:04.817839 systemd[1]: Started sshd@106-91.99.216.181:22-139.178.68.195:38666.service - OpenSSH per-connection server daemon (139.178.68.195:38666). Sep 6 00:04:05.812183 sshd[5585]: Accepted publickey for core from 139.178.68.195 port 38666 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:05.814083 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:05.818810 systemd-logind[1566]: New session 100 of user core. Sep 6 00:04:05.826962 systemd[1]: Started session-100.scope - Session 100 of User core. Sep 6 00:04:06.568831 sshd[5585]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:06.575386 systemd[1]: sshd@106-91.99.216.181:22-139.178.68.195:38666.service: Deactivated successfully. Sep 6 00:04:06.579573 systemd[1]: session-100.scope: Deactivated successfully. Sep 6 00:04:06.580937 systemd-logind[1566]: Session 100 logged out. Waiting for processes to exit. Sep 6 00:04:06.582761 systemd-logind[1566]: Removed session 100. Sep 6 00:04:11.739335 systemd[1]: Started sshd@107-91.99.216.181:22-139.178.68.195:43282.service - OpenSSH per-connection server daemon (139.178.68.195:43282). Sep 6 00:04:12.728668 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 43282 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:12.732720 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:12.737586 systemd-logind[1566]: New session 101 of user core. Sep 6 00:04:12.745890 systemd[1]: Started session-101.scope - Session 101 of User core. Sep 6 00:04:13.489132 sshd[5599]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:13.494064 systemd[1]: sshd@107-91.99.216.181:22-139.178.68.195:43282.service: Deactivated successfully. Sep 6 00:04:13.498777 systemd[1]: session-101.scope: Deactivated successfully. Sep 6 00:04:13.500522 systemd-logind[1566]: Session 101 logged out. Waiting for processes to exit. Sep 6 00:04:13.501595 systemd-logind[1566]: Removed session 101. Sep 6 00:04:18.661726 systemd[1]: Started sshd@108-91.99.216.181:22-139.178.68.195:43290.service - OpenSSH per-connection server daemon (139.178.68.195:43290). Sep 6 00:04:19.676366 sshd[5612]: Accepted publickey for core from 139.178.68.195 port 43290 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:19.679273 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:19.689446 systemd-logind[1566]: New session 102 of user core. Sep 6 00:04:19.691729 systemd[1]: Started session-102.scope - Session 102 of User core. Sep 6 00:04:20.451792 sshd[5612]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:20.458318 systemd[1]: sshd@108-91.99.216.181:22-139.178.68.195:43290.service: Deactivated successfully. Sep 6 00:04:20.464010 systemd[1]: session-102.scope: Deactivated successfully. Sep 6 00:04:20.465267 systemd-logind[1566]: Session 102 logged out. Waiting for processes to exit. Sep 6 00:04:20.466604 systemd-logind[1566]: Removed session 102. 
Sep 6 00:04:25.625978 systemd[1]: Started sshd@109-91.99.216.181:22-139.178.68.195:41826.service - OpenSSH per-connection server daemon (139.178.68.195:41826). Sep 6 00:04:26.623749 sshd[5626]: Accepted publickey for core from 139.178.68.195 port 41826 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:26.625126 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:26.631296 systemd-logind[1566]: New session 103 of user core. Sep 6 00:04:26.639954 systemd[1]: Started session-103.scope - Session 103 of User core. Sep 6 00:04:27.409938 sshd[5626]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:27.416948 systemd[1]: sshd@109-91.99.216.181:22-139.178.68.195:41826.service: Deactivated successfully. Sep 6 00:04:27.419905 systemd[1]: session-103.scope: Deactivated successfully. Sep 6 00:04:27.423381 systemd-logind[1566]: Session 103 logged out. Waiting for processes to exit. Sep 6 00:04:27.424847 systemd-logind[1566]: Removed session 103. Sep 6 00:04:32.596852 systemd[1]: Started sshd@110-91.99.216.181:22-139.178.68.195:60538.service - OpenSSH per-connection server daemon (139.178.68.195:60538). Sep 6 00:04:33.654717 sshd[5642]: Accepted publickey for core from 139.178.68.195 port 60538 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:33.655927 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:33.665226 systemd-logind[1566]: New session 104 of user core. Sep 6 00:04:33.670341 systemd[1]: Started session-104.scope - Session 104 of User core. Sep 6 00:04:34.447783 sshd[5642]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:34.459807 systemd[1]: sshd@110-91.99.216.181:22-139.178.68.195:60538.service: Deactivated successfully. Sep 6 00:04:34.467353 systemd[1]: session-104.scope: Deactivated successfully. Sep 6 00:04:34.469454 systemd-logind[1566]: Session 104 logged out. Waiting for processes to exit. Sep 6 00:04:34.470565 systemd-logind[1566]: Removed session 104. Sep 6 00:04:39.627204 systemd[1]: Started sshd@111-91.99.216.181:22-139.178.68.195:60544.service - OpenSSH per-connection server daemon (139.178.68.195:60544). Sep 6 00:04:40.675493 sshd[5658]: Accepted publickey for core from 139.178.68.195 port 60544 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:40.677864 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:40.686143 systemd-logind[1566]: New session 105 of user core. Sep 6 00:04:40.694015 systemd[1]: Started session-105.scope - Session 105 of User core. Sep 6 00:04:41.467750 sshd[5658]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:41.473563 systemd[1]: sshd@111-91.99.216.181:22-139.178.68.195:60544.service: Deactivated successfully. Sep 6 00:04:41.479378 systemd[1]: session-105.scope: Deactivated successfully. Sep 6 00:04:41.480545 systemd-logind[1566]: Session 105 logged out. Waiting for processes to exit. Sep 6 00:04:41.482791 systemd-logind[1566]: Removed session 105. Sep 6 00:04:46.636565 systemd[1]: Started sshd@112-91.99.216.181:22-139.178.68.195:58484.service - OpenSSH per-connection server daemon (139.178.68.195:58484). 
Sep 6 00:04:47.633516 sshd[5673]: Accepted publickey for core from 139.178.68.195 port 58484 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:47.636136 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:47.643610 systemd-logind[1566]: New session 106 of user core. Sep 6 00:04:47.647801 systemd[1]: Started session-106.scope - Session 106 of User core. Sep 6 00:04:48.393847 sshd[5673]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:48.401654 systemd[1]: sshd@112-91.99.216.181:22-139.178.68.195:58484.service: Deactivated successfully. Sep 6 00:04:48.405327 systemd[1]: session-106.scope: Deactivated successfully. Sep 6 00:04:48.407125 systemd-logind[1566]: Session 106 logged out. Waiting for processes to exit. Sep 6 00:04:48.408637 systemd-logind[1566]: Removed session 106. Sep 6 00:04:53.588949 systemd[1]: Started sshd@113-91.99.216.181:22-139.178.68.195:57334.service - OpenSSH per-connection server daemon (139.178.68.195:57334). Sep 6 00:04:54.651811 sshd[5687]: Accepted publickey for core from 139.178.68.195 port 57334 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:04:54.654937 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:04:54.663657 systemd-logind[1566]: New session 107 of user core. Sep 6 00:04:54.671985 systemd[1]: Started session-107.scope - Session 107 of User core. Sep 6 00:04:55.459737 sshd[5687]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:55.467691 systemd[1]: sshd@113-91.99.216.181:22-139.178.68.195:57334.service: Deactivated successfully. Sep 6 00:04:55.476575 systemd[1]: session-107.scope: Deactivated successfully. Sep 6 00:04:55.481937 systemd-logind[1566]: Session 107 logged out. Waiting for processes to exit. Sep 6 00:04:55.483568 systemd-logind[1566]: Removed session 107. Sep 6 00:05:00.624884 systemd[1]: Started sshd@114-91.99.216.181:22-139.178.68.195:39666.service - OpenSSH per-connection server daemon (139.178.68.195:39666). Sep 6 00:05:01.629751 sshd[5701]: Accepted publickey for core from 139.178.68.195 port 39666 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:01.634449 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:01.649562 systemd-logind[1566]: New session 108 of user core. Sep 6 00:05:01.654942 systemd[1]: Started session-108.scope - Session 108 of User core. Sep 6 00:05:02.393788 sshd[5701]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:02.399252 systemd-logind[1566]: Session 108 logged out. Waiting for processes to exit. Sep 6 00:05:02.400353 systemd[1]: sshd@114-91.99.216.181:22-139.178.68.195:39666.service: Deactivated successfully. Sep 6 00:05:02.404606 systemd[1]: session-108.scope: Deactivated successfully. Sep 6 00:05:02.405754 systemd-logind[1566]: Removed session 108. Sep 6 00:05:07.561725 systemd[1]: Started sshd@115-91.99.216.181:22-139.178.68.195:39682.service - OpenSSH per-connection server daemon (139.178.68.195:39682). Sep 6 00:05:08.557043 sshd[5717]: Accepted publickey for core from 139.178.68.195 port 39682 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:08.560211 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:08.565751 systemd-logind[1566]: New session 109 of user core. Sep 6 00:05:08.571788 systemd[1]: Started session-109.scope - Session 109 of User core. 
Sep 6 00:05:09.320818 sshd[5717]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:09.329813 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 6 00:05:09.331752 systemd[1]: sshd@115-91.99.216.181:22-139.178.68.195:39682.service: Deactivated successfully. Sep 6 00:05:09.336623 systemd[1]: session-109.scope: Deactivated successfully. Sep 6 00:05:09.338756 systemd-logind[1566]: Session 109 logged out. Waiting for processes to exit. Sep 6 00:05:09.341049 systemd-logind[1566]: Removed session 109. Sep 6 00:05:09.352568 systemd-tmpfiles[5729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:05:09.352924 systemd-tmpfiles[5729]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 6 00:05:09.353495 systemd-tmpfiles[5729]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:05:09.353706 systemd-tmpfiles[5729]: ACLs are not supported, ignoring. Sep 6 00:05:09.353754 systemd-tmpfiles[5729]: ACLs are not supported, ignoring. Sep 6 00:05:09.357759 systemd-tmpfiles[5729]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:05:09.357773 systemd-tmpfiles[5729]: Skipping /boot Sep 6 00:05:09.364383 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 6 00:05:09.364982 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 6 00:05:14.489849 systemd[1]: Started sshd@116-91.99.216.181:22-139.178.68.195:57994.service - OpenSSH per-connection server daemon (139.178.68.195:57994). Sep 6 00:05:15.497952 sshd[5734]: Accepted publickey for core from 139.178.68.195 port 57994 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:15.500497 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:15.508292 systemd-logind[1566]: New session 110 of user core. Sep 6 00:05:15.514859 systemd[1]: Started session-110.scope - Session 110 of User core. Sep 6 00:05:16.251435 sshd[5734]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:16.259738 systemd[1]: sshd@116-91.99.216.181:22-139.178.68.195:57994.service: Deactivated successfully. Sep 6 00:05:16.263978 systemd-logind[1566]: Session 110 logged out. Waiting for processes to exit. Sep 6 00:05:16.267129 systemd[1]: session-110.scope: Deactivated successfully. Sep 6 00:05:16.268545 systemd-logind[1566]: Removed session 110. Sep 6 00:05:21.422774 systemd[1]: Started sshd@117-91.99.216.181:22-139.178.68.195:34786.service - OpenSSH per-connection server daemon (139.178.68.195:34786). Sep 6 00:05:22.439957 sshd[5750]: Accepted publickey for core from 139.178.68.195 port 34786 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:22.442245 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:22.448041 systemd-logind[1566]: New session 111 of user core. Sep 6 00:05:22.455715 systemd[1]: Started session-111.scope - Session 111 of User core. Sep 6 00:05:23.216952 sshd[5750]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:23.225719 systemd[1]: sshd@117-91.99.216.181:22-139.178.68.195:34786.service: Deactivated successfully. Sep 6 00:05:23.236628 systemd[1]: session-111.scope: Deactivated successfully. Sep 6 00:05:23.240130 systemd-logind[1566]: Session 111 logged out. Waiting for processes to exit. 
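The systemd-tmpfiles-clean run above (Sep 6 00:05:09) is noisy but benign: "Duplicate line for path ..., ignoring" means two tmpfiles.d fragments declare the same path and the one parsed later is skipped, while the ACL and autofs notices are informational. tmpfiles.d lines follow the format "Type Path Mode User Group Age Argument"; a hypothetical pair of fragments that would trigger the same duplicate warning is sketched below (not the actual Flatcar fragments).

    # /usr/lib/tmpfiles.d/systemd.conf (first declaration, applied)
    d /var/lib/systemd 0755 root root -

    # /usr/lib/tmpfiles.d/example-extra.conf (hypothetical duplicate, ignored)
    d /var/lib/systemd 0755 root root -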
Sep 6 00:05:23.242646 systemd-logind[1566]: Removed session 111. Sep 6 00:05:28.416563 systemd[1]: Started sshd@118-91.99.216.181:22-139.178.68.195:34790.service - OpenSSH per-connection server daemon (139.178.68.195:34790). Sep 6 00:05:29.480143 sshd[5766]: Accepted publickey for core from 139.178.68.195 port 34790 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:29.482489 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:29.490111 systemd-logind[1566]: New session 112 of user core. Sep 6 00:05:29.495934 systemd[1]: Started session-112.scope - Session 112 of User core. Sep 6 00:05:30.305532 sshd[5766]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:30.311442 systemd[1]: sshd@118-91.99.216.181:22-139.178.68.195:34790.service: Deactivated successfully. Sep 6 00:05:30.311488 systemd-logind[1566]: Session 112 logged out. Waiting for processes to exit. Sep 6 00:05:30.316161 systemd[1]: session-112.scope: Deactivated successfully. Sep 6 00:05:30.317547 systemd-logind[1566]: Removed session 112. Sep 6 00:05:35.462784 systemd[1]: Started sshd@119-91.99.216.181:22-139.178.68.195:46218.service - OpenSSH per-connection server daemon (139.178.68.195:46218). Sep 6 00:05:36.461987 sshd[5782]: Accepted publickey for core from 139.178.68.195 port 46218 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:36.463706 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:36.468964 systemd-logind[1566]: New session 113 of user core. Sep 6 00:05:36.475953 systemd[1]: Started session-113.scope - Session 113 of User core. Sep 6 00:05:37.239863 sshd[5782]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:37.245687 systemd[1]: sshd@119-91.99.216.181:22-139.178.68.195:46218.service: Deactivated successfully. Sep 6 00:05:37.255778 systemd[1]: session-113.scope: Deactivated successfully. Sep 6 00:05:37.261864 systemd-logind[1566]: Session 113 logged out. Waiting for processes to exit. Sep 6 00:05:37.264299 systemd-logind[1566]: Removed session 113. Sep 6 00:05:42.412768 systemd[1]: Started sshd@120-91.99.216.181:22-139.178.68.195:45032.service - OpenSSH per-connection server daemon (139.178.68.195:45032). Sep 6 00:05:43.425581 sshd[5796]: Accepted publickey for core from 139.178.68.195 port 45032 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:43.429226 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:43.443459 systemd-logind[1566]: New session 114 of user core. Sep 6 00:05:43.447764 systemd[1]: Started session-114.scope - Session 114 of User core. Sep 6 00:05:44.211061 sshd[5796]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:44.215918 systemd[1]: sshd@120-91.99.216.181:22-139.178.68.195:45032.service: Deactivated successfully. Sep 6 00:05:44.219295 systemd-logind[1566]: Session 114 logged out. Waiting for processes to exit. Sep 6 00:05:44.219536 systemd[1]: session-114.scope: Deactivated successfully. Sep 6 00:05:44.222335 systemd-logind[1566]: Removed session 114. Sep 6 00:05:49.379755 systemd[1]: Started sshd@121-91.99.216.181:22-139.178.68.195:45042.service - OpenSSH per-connection server daemon (139.178.68.195:45042). 
Sep 6 00:05:50.377373 sshd[5810]: Accepted publickey for core from 139.178.68.195 port 45042 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:50.379002 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:50.387076 systemd-logind[1566]: New session 115 of user core. Sep 6 00:05:50.392048 systemd[1]: Started session-115.scope - Session 115 of User core. Sep 6 00:05:51.135481 sshd[5810]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:51.141293 systemd[1]: sshd@121-91.99.216.181:22-139.178.68.195:45042.service: Deactivated successfully. Sep 6 00:05:51.146040 systemd[1]: session-115.scope: Deactivated successfully. Sep 6 00:05:51.147764 systemd-logind[1566]: Session 115 logged out. Waiting for processes to exit. Sep 6 00:05:51.149012 systemd-logind[1566]: Removed session 115. Sep 6 00:05:56.329003 systemd[1]: Started sshd@122-91.99.216.181:22-139.178.68.195:44176.service - OpenSSH per-connection server daemon (139.178.68.195:44176). Sep 6 00:05:57.378150 sshd[5824]: Accepted publickey for core from 139.178.68.195 port 44176 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:05:57.380366 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:05:57.386665 systemd-logind[1566]: New session 116 of user core. Sep 6 00:05:57.391952 systemd[1]: Started session-116.scope - Session 116 of User core. Sep 6 00:05:57.879837 systemd[1]: Started sshd@123-91.99.216.181:22-61.76.58.118:40520.service - OpenSSH per-connection server daemon (61.76.58.118:40520). Sep 6 00:05:58.178356 sshd[5824]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:58.184241 systemd[1]: sshd@122-91.99.216.181:22-139.178.68.195:44176.service: Deactivated successfully. Sep 6 00:05:58.189209 systemd-logind[1566]: Session 116 logged out. Waiting for processes to exit. Sep 6 00:05:58.190032 systemd[1]: session-116.scope: Deactivated successfully. Sep 6 00:05:58.191647 systemd-logind[1566]: Removed session 116. Sep 6 00:06:01.837435 sshd[5828]: Invalid user Support from 61.76.58.118 port 40520 Sep 6 00:06:02.937384 sshd[5840]: pam_faillock(sshd:auth): User unknown Sep 6 00:06:02.940500 sshd[5828]: Postponed keyboard-interactive for invalid user Support from 61.76.58.118 port 40520 ssh2 [preauth] Sep 6 00:06:03.338798 systemd[1]: Started sshd@124-91.99.216.181:22-139.178.68.195:54406.service - OpenSSH per-connection server daemon (139.178.68.195:54406). Sep 6 00:06:03.896925 sshd[5840]: pam_unix(sshd:auth): check pass; user unknown Sep 6 00:06:03.896974 sshd[5840]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.76.58.118 Sep 6 00:06:03.898322 sshd[5840]: pam_faillock(sshd:auth): User unknown Sep 6 00:06:04.358964 sshd[5842]: Accepted publickey for core from 139.178.68.195 port 54406 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:06:04.363120 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:04.374129 systemd-logind[1566]: New session 117 of user core. Sep 6 00:06:04.381871 systemd[1]: Started session-117.scope - Session 117 of User core. Sep 6 00:06:05.144828 sshd[5842]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:05.150983 systemd[1]: sshd@124-91.99.216.181:22-139.178.68.195:54406.service: Deactivated successfully. Sep 6 00:06:05.155813 systemd[1]: session-117.scope: Deactivated successfully. 
Sep 6 00:06:05.156338 systemd-logind[1566]: Session 117 logged out. Waiting for processes to exit. Sep 6 00:06:05.157927 systemd-logind[1566]: Removed session 117. Sep 6 00:06:06.109333 sshd[5828]: PAM: Permission denied for illegal user Support from 61.76.58.118 Sep 6 00:06:06.111175 sshd[5828]: Failed keyboard-interactive/pam for invalid user Support from 61.76.58.118 port 40520 ssh2 Sep 6 00:06:07.063184 sshd[5828]: Connection closed by invalid user Support 61.76.58.118 port 40520 [preauth] Sep 6 00:06:07.066815 systemd[1]: sshd@123-91.99.216.181:22-61.76.58.118:40520.service: Deactivated successfully. Sep 6 00:06:10.311820 systemd[1]: Started sshd@125-91.99.216.181:22-139.178.68.195:43044.service - OpenSSH per-connection server daemon (139.178.68.195:43044). Sep 6 00:06:11.325157 sshd[5862]: Accepted publickey for core from 139.178.68.195 port 43044 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:06:11.326077 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:11.346307 systemd-logind[1566]: New session 118 of user core. Sep 6 00:06:11.351353 systemd[1]: Started session-118.scope - Session 118 of User core. Sep 6 00:06:12.104658 sshd[5862]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:12.111166 systemd[1]: sshd@125-91.99.216.181:22-139.178.68.195:43044.service: Deactivated successfully. Sep 6 00:06:12.117280 systemd[1]: session-118.scope: Deactivated successfully. Sep 6 00:06:12.122580 systemd-logind[1566]: Session 118 logged out. Waiting for processes to exit. Sep 6 00:06:12.123914 systemd-logind[1566]: Removed session 118. Sep 6 00:06:17.270831 systemd[1]: Started sshd@126-91.99.216.181:22-139.178.68.195:43058.service - OpenSSH per-connection server daemon (139.178.68.195:43058). Sep 6 00:06:18.264899 sshd[5877]: Accepted publickey for core from 139.178.68.195 port 43058 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:06:18.267068 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:18.274499 systemd-logind[1566]: New session 119 of user core. Sep 6 00:06:18.282639 systemd[1]: Started session-119.scope - Session 119 of User core. Sep 6 00:06:19.020962 sshd[5877]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:19.026555 systemd-logind[1566]: Session 119 logged out. Waiting for processes to exit. Sep 6 00:06:19.027121 systemd[1]: sshd@126-91.99.216.181:22-139.178.68.195:43058.service: Deactivated successfully. Sep 6 00:06:19.032034 systemd[1]: session-119.scope: Deactivated successfully. Sep 6 00:06:19.035052 systemd-logind[1566]: Removed session 119. Sep 6 00:06:19.189923 systemd[1]: Started sshd@127-91.99.216.181:22-139.178.68.195:43062.service - OpenSSH per-connection server daemon (139.178.68.195:43062). Sep 6 00:06:20.179143 sshd[5891]: Accepted publickey for core from 139.178.68.195 port 43062 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:06:20.181135 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:20.187005 systemd-logind[1566]: New session 120 of user core. Sep 6 00:06:20.195914 systemd[1]: Started session-120.scope - Session 120 of User core. 
Sep 6 00:06:22.745982 containerd[1598]: time="2025-09-06T00:06:22.745921124Z" level=info msg="StopContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" with timeout 30 (s)" Sep 6 00:06:22.747080 containerd[1598]: time="2025-09-06T00:06:22.746894963Z" level=info msg="Stop container \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" with signal terminated" Sep 6 00:06:22.776576 containerd[1598]: time="2025-09-06T00:06:22.776327391Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:06:22.783339 containerd[1598]: time="2025-09-06T00:06:22.783232270Z" level=info msg="StopContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" with timeout 2 (s)" Sep 6 00:06:22.784180 containerd[1598]: time="2025-09-06T00:06:22.784152627Z" level=info msg="Stop container \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" with signal terminated" Sep 6 00:06:22.793874 systemd-networkd[1244]: lxc_health: Link DOWN Sep 6 00:06:22.793880 systemd-networkd[1244]: lxc_health: Lost carrier Sep 6 00:06:22.808590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2-rootfs.mount: Deactivated successfully. Sep 6 00:06:22.817715 containerd[1598]: time="2025-09-06T00:06:22.817574856Z" level=info msg="shim disconnected" id=2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2 namespace=k8s.io Sep 6 00:06:22.817715 containerd[1598]: time="2025-09-06T00:06:22.817694421Z" level=warning msg="cleaning up after shim disconnected" id=2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2 namespace=k8s.io Sep 6 00:06:22.817715 containerd[1598]: time="2025-09-06T00:06:22.817704221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:06:22.837970 containerd[1598]: time="2025-09-06T00:06:22.837479459Z" level=info msg="StopContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" returns successfully" Sep 6 00:06:22.838095 containerd[1598]: time="2025-09-06T00:06:22.838013121Z" level=info msg="StopPodSandbox for \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\"" Sep 6 00:06:22.838095 containerd[1598]: time="2025-09-06T00:06:22.838044802Z" level=info msg="Container to stop \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.840076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516-shm.mount: Deactivated successfully. Sep 6 00:06:22.853444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00-rootfs.mount: Deactivated successfully. 
Sep 6 00:06:22.862136 containerd[1598]: time="2025-09-06T00:06:22.861643515Z" level=info msg="shim disconnected" id=9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00 namespace=k8s.io Sep 6 00:06:22.862136 containerd[1598]: time="2025-09-06T00:06:22.862083532Z" level=warning msg="cleaning up after shim disconnected" id=9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00 namespace=k8s.io Sep 6 00:06:22.862136 containerd[1598]: time="2025-09-06T00:06:22.862129214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:06:22.889252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516-rootfs.mount: Deactivated successfully. Sep 6 00:06:22.893865 containerd[1598]: time="2025-09-06T00:06:22.893775211Z" level=info msg="StopContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" returns successfully" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895616086Z" level=info msg="StopPodSandbox for \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\"" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895680128Z" level=info msg="Container to stop \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895709730Z" level=info msg="Container to stop \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895729090Z" level=info msg="Container to stop \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895745251Z" level=info msg="Container to stop \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.896008 containerd[1598]: time="2025-09-06T00:06:22.895760612Z" level=info msg="Container to stop \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:22.900449 containerd[1598]: time="2025-09-06T00:06:22.899807855Z" level=info msg="shim disconnected" id=6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516 namespace=k8s.io Sep 6 00:06:22.900449 containerd[1598]: time="2025-09-06T00:06:22.900102867Z" level=warning msg="cleaning up after shim disconnected" id=6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516 namespace=k8s.io Sep 6 00:06:22.900449 containerd[1598]: time="2025-09-06T00:06:22.900221032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:06:22.924577 containerd[1598]: time="2025-09-06T00:06:22.924360206Z" level=info msg="TearDown network for sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" successfully" Sep 6 00:06:22.924577 containerd[1598]: time="2025-09-06T00:06:22.924426369Z" level=info msg="StopPodSandbox for \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" returns successfully" Sep 6 00:06:22.945730 containerd[1598]: time="2025-09-06T00:06:22.945466258Z" level=info msg="shim disconnected" id=ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19 
namespace=k8s.io Sep 6 00:06:22.945730 containerd[1598]: time="2025-09-06T00:06:22.945532141Z" level=warning msg="cleaning up after shim disconnected" id=ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19 namespace=k8s.io Sep 6 00:06:22.945730 containerd[1598]: time="2025-09-06T00:06:22.945541621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:06:22.960792 containerd[1598]: time="2025-09-06T00:06:22.959532986Z" level=info msg="TearDown network for sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" successfully" Sep 6 00:06:22.960792 containerd[1598]: time="2025-09-06T00:06:22.959574907Z" level=info msg="StopPodSandbox for \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" returns successfully" Sep 6 00:06:23.024636 kubelet[2753]: I0906 00:06:23.024483 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbgds\" (UniqueName: \"kubernetes.io/projected/27d862b8-7252-4fff-913d-7ef8fa524d35-kube-api-access-fbgds\") pod \"27d862b8-7252-4fff-913d-7ef8fa524d35\" (UID: \"27d862b8-7252-4fff-913d-7ef8fa524d35\") " Sep 6 00:06:23.024636 kubelet[2753]: I0906 00:06:23.024567 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27d862b8-7252-4fff-913d-7ef8fa524d35-cilium-config-path\") pod \"27d862b8-7252-4fff-913d-7ef8fa524d35\" (UID: \"27d862b8-7252-4fff-913d-7ef8fa524d35\") " Sep 6 00:06:23.029755 kubelet[2753]: I0906 00:06:23.029661 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d862b8-7252-4fff-913d-7ef8fa524d35-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27d862b8-7252-4fff-913d-7ef8fa524d35" (UID: "27d862b8-7252-4fff-913d-7ef8fa524d35"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:06:23.029755 kubelet[2753]: I0906 00:06:23.029735 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d862b8-7252-4fff-913d-7ef8fa524d35-kube-api-access-fbgds" (OuterVolumeSpecName: "kube-api-access-fbgds") pod "27d862b8-7252-4fff-913d-7ef8fa524d35" (UID: "27d862b8-7252-4fff-913d-7ef8fa524d35"). InnerVolumeSpecName "kube-api-access-fbgds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:06:23.126515 kubelet[2753]: I0906 00:06:23.126392 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwv5f\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-kube-api-access-cwv5f\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126515 kubelet[2753]: I0906 00:06:23.126507 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cni-path\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126549 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-run\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126587 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-cgroup\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126624 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-net\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126664 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-clustermesh-secrets\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126699 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-kernel\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.126748 kubelet[2753]: I0906 00:06:23.126731 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-bpf-maps\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126763 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-etc-cni-netd\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126804 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-config-path\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: 
\"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126837 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-xtables-lock\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126873 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hubble-tls\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126910 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-lib-modules\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127019 kubelet[2753]: I0906 00:06:23.126942 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hostproc\") pod \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\" (UID: \"ecbd88c5-fb01-4a23-a044-b7fb468bbf9a\") " Sep 6 00:06:23.127263 kubelet[2753]: I0906 00:06:23.127005 2753 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27d862b8-7252-4fff-913d-7ef8fa524d35-cilium-config-path\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.127263 kubelet[2753]: I0906 00:06:23.127030 2753 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbgds\" (UniqueName: \"kubernetes.io/projected/27d862b8-7252-4fff-913d-7ef8fa524d35-kube-api-access-fbgds\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.127263 kubelet[2753]: I0906 00:06:23.127107 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hostproc" (OuterVolumeSpecName: "hostproc") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129429 kubelet[2753]: I0906 00:06:23.127490 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129429 kubelet[2753]: I0906 00:06:23.127556 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cni-path" (OuterVolumeSpecName: "cni-path") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129429 kubelet[2753]: I0906 00:06:23.127584 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129429 kubelet[2753]: I0906 00:06:23.127609 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129429 kubelet[2753]: I0906 00:06:23.127635 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.129948 kubelet[2753]: I0906 00:06:23.129914 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.130099 kubelet[2753]: I0906 00:06:23.130059 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.130169 kubelet[2753]: I0906 00:06:23.130109 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.132162 kubelet[2753]: I0906 00:06:23.132115 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:06:23.132264 kubelet[2753]: I0906 00:06:23.132181 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:23.133053 kubelet[2753]: I0906 00:06:23.132985 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-kube-api-access-cwv5f" (OuterVolumeSpecName: "kube-api-access-cwv5f") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "kube-api-access-cwv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:06:23.134245 kubelet[2753]: I0906 00:06:23.134216 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:06:23.134357 kubelet[2753]: I0906 00:06:23.134259 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" (UID: "ecbd88c5-fb01-4a23-a044-b7fb468bbf9a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:06:23.228166 kubelet[2753]: I0906 00:06:23.228112 2753 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-config-path\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228544 kubelet[2753]: I0906 00:06:23.228517 2753 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-xtables-lock\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228694 kubelet[2753]: I0906 00:06:23.228671 2753 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228938 kubelet[2753]: I0906 00:06:23.228825 2753 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-bpf-maps\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228938 kubelet[2753]: I0906 00:06:23.228881 2753 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-etc-cni-netd\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228938 kubelet[2753]: I0906 00:06:23.228907 2753 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hubble-tls\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228938 kubelet[2753]: I0906 00:06:23.228924 2753 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-hostproc\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.228938 kubelet[2753]: I0906 00:06:23.228939 2753 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-lib-modules\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.228956 2753 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwv5f\" (UniqueName: \"kubernetes.io/projected/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-kube-api-access-cwv5f\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.228972 2753 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cni-path\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.228988 2753 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-host-proc-sys-net\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.229029 2753 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-clustermesh-secrets\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.229045 2753 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-run\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.229554 kubelet[2753]: I0906 00:06:23.229061 2753 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a-cilium-cgroup\") on node \"ci-4081-3-5-n-f09ad01745\" DevicePath \"\"" Sep 6 00:06:23.249435 kubelet[2753]: I0906 00:06:23.247691 2753 scope.go:117] "RemoveContainer" containerID="9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00" Sep 6 00:06:23.253739 containerd[1598]: time="2025-09-06T00:06:23.252034156Z" level=info msg="RemoveContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\"" Sep 6 00:06:23.268922 containerd[1598]: time="2025-09-06T00:06:23.266882156Z" level=info msg="RemoveContainer for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" returns successfully" Sep 6 00:06:23.269077 kubelet[2753]: I0906 00:06:23.267489 2753 scope.go:117] "RemoveContainer" containerID="5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e" Sep 6 00:06:23.273799 containerd[1598]: time="2025-09-06T00:06:23.273748433Z" level=info msg="RemoveContainer for \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\"" Sep 6 00:06:23.294818 containerd[1598]: time="2025-09-06T00:06:23.293562553Z" level=info msg="RemoveContainer for \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\" returns successfully" Sep 6 00:06:23.297624 kubelet[2753]: I0906 00:06:23.297524 2753 scope.go:117] "RemoveContainer" containerID="bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f" Sep 6 00:06:23.299582 containerd[1598]: time="2025-09-06T00:06:23.299540674Z" level=info msg="RemoveContainer for \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\"" Sep 6 00:06:23.306492 containerd[1598]: time="2025-09-06T00:06:23.306443193Z" level=info msg="RemoveContainer for \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\" returns successfully" Sep 6 
00:06:23.306971 kubelet[2753]: I0906 00:06:23.306824 2753 scope.go:117] "RemoveContainer" containerID="b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37" Sep 6 00:06:23.311092 containerd[1598]: time="2025-09-06T00:06:23.310868172Z" level=info msg="RemoveContainer for \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\"" Sep 6 00:06:23.317959 containerd[1598]: time="2025-09-06T00:06:23.317816812Z" level=info msg="RemoveContainer for \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\" returns successfully" Sep 6 00:06:23.318171 kubelet[2753]: I0906 00:06:23.318081 2753 scope.go:117] "RemoveContainer" containerID="39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f" Sep 6 00:06:23.319538 containerd[1598]: time="2025-09-06T00:06:23.319500200Z" level=info msg="RemoveContainer for \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\"" Sep 6 00:06:23.322840 containerd[1598]: time="2025-09-06T00:06:23.322785693Z" level=info msg="RemoveContainer for \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\" returns successfully" Sep 6 00:06:23.323298 kubelet[2753]: I0906 00:06:23.323149 2753 scope.go:117] "RemoveContainer" containerID="9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00" Sep 6 00:06:23.323728 containerd[1598]: time="2025-09-06T00:06:23.323582885Z" level=error msg="ContainerStatus for \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\": not found" Sep 6 00:06:23.324189 kubelet[2753]: E0906 00:06:23.323967 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\": not found" containerID="9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00" Sep 6 00:06:23.324189 kubelet[2753]: I0906 00:06:23.324005 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00"} err="failed to get container status \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c853af0d62870a992e0f1624cf59d6f2ed2f794965b9f32141a738afb297a00\": not found" Sep 6 00:06:23.324189 kubelet[2753]: I0906 00:06:23.324101 2753 scope.go:117] "RemoveContainer" containerID="5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e" Sep 6 00:06:23.324858 containerd[1598]: time="2025-09-06T00:06:23.324564685Z" level=error msg="ContainerStatus for \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\": not found" Sep 6 00:06:23.324955 kubelet[2753]: E0906 00:06:23.324732 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\": not found" containerID="5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e" Sep 6 00:06:23.324955 kubelet[2753]: I0906 00:06:23.324761 2753 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e"} err="failed to get container status \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fa6073f6e54c7b12893a0067c0ef3c9b25e280a1877239d0d0e9a3ba3c8557e\": not found" Sep 6 00:06:23.324955 kubelet[2753]: I0906 00:06:23.324782 2753 scope.go:117] "RemoveContainer" containerID="bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f" Sep 6 00:06:23.325297 containerd[1598]: time="2025-09-06T00:06:23.325202951Z" level=error msg="ContainerStatus for \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\": not found" Sep 6 00:06:23.325445 kubelet[2753]: E0906 00:06:23.325344 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\": not found" containerID="bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f" Sep 6 00:06:23.325445 kubelet[2753]: I0906 00:06:23.325370 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f"} err="failed to get container status \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bebf0769049054ac4b972ecb44f8f5f61cd64a4f8bf6286fba72e47e4fcd1e6f\": not found" Sep 6 00:06:23.325445 kubelet[2753]: I0906 00:06:23.325415 2753 scope.go:117] "RemoveContainer" containerID="b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37" Sep 6 00:06:23.325722 containerd[1598]: time="2025-09-06T00:06:23.325616127Z" level=error msg="ContainerStatus for \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\": not found" Sep 6 00:06:23.325988 kubelet[2753]: E0906 00:06:23.325853 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\": not found" containerID="b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37" Sep 6 00:06:23.325988 kubelet[2753]: I0906 00:06:23.325891 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37"} err="failed to get container status \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\": rpc error: code = NotFound desc = an error occurred when try to find container \"b981377f517e11d766a72ce34c82bd817277467fa66226c1158fd4fa28be2d37\": not found" Sep 6 00:06:23.325988 kubelet[2753]: I0906 00:06:23.325912 2753 scope.go:117] "RemoveContainer" containerID="39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f" Sep 6 00:06:23.326123 containerd[1598]: time="2025-09-06T00:06:23.326065225Z" level=error msg="ContainerStatus for \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\": not found" Sep 6 00:06:23.326507 kubelet[2753]: E0906 00:06:23.326258 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\": not found" containerID="39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f" Sep 6 00:06:23.326507 kubelet[2753]: I0906 00:06:23.326327 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f"} err="failed to get container status \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\": rpc error: code = NotFound desc = an error occurred when try to find container \"39a23de4e25f755708fbf6cf8f43e1b73371e3f32c4061ab44bdb0e618af612f\": not found" Sep 6 00:06:23.326507 kubelet[2753]: I0906 00:06:23.326347 2753 scope.go:117] "RemoveContainer" containerID="2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2" Sep 6 00:06:23.327930 containerd[1598]: time="2025-09-06T00:06:23.327719252Z" level=info msg="RemoveContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\"" Sep 6 00:06:23.331158 containerd[1598]: time="2025-09-06T00:06:23.331120950Z" level=info msg="RemoveContainer for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" returns successfully" Sep 6 00:06:23.331490 kubelet[2753]: I0906 00:06:23.331461 2753 scope.go:117] "RemoveContainer" containerID="2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2" Sep 6 00:06:23.331894 containerd[1598]: time="2025-09-06T00:06:23.331836738Z" level=error msg="ContainerStatus for \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\": not found" Sep 6 00:06:23.332142 kubelet[2753]: E0906 00:06:23.332069 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\": not found" containerID="2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2" Sep 6 00:06:23.332142 kubelet[2753]: I0906 00:06:23.332118 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2"} err="failed to get container status \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d197c966fa8bcfe6ead7130d43c50f630fed4701d6fa06b5601b58de44dc3b2\": not found" Sep 6 00:06:23.758501 systemd[1]: var-lib-kubelet-pods-27d862b8\x2d7252\x2d4fff\x2d913d\x2d7ef8fa524d35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbgds.mount: Deactivated successfully. Sep 6 00:06:23.758676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19-rootfs.mount: Deactivated successfully. Sep 6 00:06:23.758762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19-shm.mount: Deactivated successfully. 
Sep 6 00:06:23.758845 systemd[1]: var-lib-kubelet-pods-ecbd88c5\x2dfb01\x2d4a23\x2da044\x2db7fb468bbf9a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcwv5f.mount: Deactivated successfully. Sep 6 00:06:23.758931 systemd[1]: var-lib-kubelet-pods-ecbd88c5\x2dfb01\x2d4a23\x2da044\x2db7fb468bbf9a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:06:23.759007 systemd[1]: var-lib-kubelet-pods-ecbd88c5\x2dfb01\x2d4a23\x2da044\x2db7fb468bbf9a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:06:24.730460 kubelet[2753]: I0906 00:06:24.730354 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d862b8-7252-4fff-913d-7ef8fa524d35" path="/var/lib/kubelet/pods/27d862b8-7252-4fff-913d-7ef8fa524d35/volumes" Sep 6 00:06:24.731254 kubelet[2753]: I0906 00:06:24.731209 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" path="/var/lib/kubelet/pods/ecbd88c5-fb01-4a23-a044-b7fb468bbf9a/volumes" Sep 6 00:06:24.826815 sshd[5891]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:24.833036 systemd[1]: sshd@127-91.99.216.181:22-139.178.68.195:43062.service: Deactivated successfully. Sep 6 00:06:24.835902 systemd-logind[1566]: Session 120 logged out. Waiting for processes to exit. Sep 6 00:06:24.836338 systemd[1]: session-120.scope: Deactivated successfully. Sep 6 00:06:24.839195 systemd-logind[1566]: Removed session 120. Sep 6 00:06:24.999804 systemd[1]: Started sshd@128-91.99.216.181:22-139.178.68.195:52714.service - OpenSSH per-connection server daemon (139.178.68.195:52714). Sep 6 00:06:25.995794 sshd[6057]: Accepted publickey for core from 139.178.68.195 port 52714 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 6 00:06:25.998921 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:26.004292 systemd-logind[1566]: New session 121 of user core. Sep 6 00:06:26.011042 systemd[1]: Started session-121.scope - Session 121 of User core. 
Sep 6 00:06:26.796427 containerd[1598]: time="2025-09-06T00:06:26.795894023Z" level=info msg="StopPodSandbox for \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\"" Sep 6 00:06:26.796427 containerd[1598]: time="2025-09-06T00:06:26.795998627Z" level=info msg="TearDown network for sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" successfully" Sep 6 00:06:26.796427 containerd[1598]: time="2025-09-06T00:06:26.796020988Z" level=info msg="StopPodSandbox for \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" returns successfully" Sep 6 00:06:26.797758 containerd[1598]: time="2025-09-06T00:06:26.797618612Z" level=info msg="RemovePodSandbox for \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\"" Sep 6 00:06:26.797758 containerd[1598]: time="2025-09-06T00:06:26.797652654Z" level=info msg="Forcibly stopping sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\"" Sep 6 00:06:26.797758 containerd[1598]: time="2025-09-06T00:06:26.797700096Z" level=info msg="TearDown network for sandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" successfully" Sep 6 00:06:26.803635 containerd[1598]: time="2025-09-06T00:06:26.802525371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 6 00:06:26.803635 containerd[1598]: time="2025-09-06T00:06:26.803517811Z" level=info msg="RemovePodSandbox \"ebdeac8f6aa8242f40af9a80823206b48d917ce061f596d532f2631024b5fe19\" returns successfully" Sep 6 00:06:26.804458 containerd[1598]: time="2025-09-06T00:06:26.804211319Z" level=info msg="StopPodSandbox for \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\"" Sep 6 00:06:26.804458 containerd[1598]: time="2025-09-06T00:06:26.804292042Z" level=info msg="TearDown network for sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" successfully" Sep 6 00:06:26.804458 containerd[1598]: time="2025-09-06T00:06:26.804305243Z" level=info msg="StopPodSandbox for \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" returns successfully" Sep 6 00:06:26.804925 containerd[1598]: time="2025-09-06T00:06:26.804785622Z" level=info msg="RemovePodSandbox for \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\"" Sep 6 00:06:26.804925 containerd[1598]: time="2025-09-06T00:06:26.804813703Z" level=info msg="Forcibly stopping sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\"" Sep 6 00:06:26.804925 containerd[1598]: time="2025-09-06T00:06:26.804866425Z" level=info msg="TearDown network for sandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" successfully" Sep 6 00:06:26.810455 containerd[1598]: time="2025-09-06T00:06:26.810259564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 6 00:06:26.810455 containerd[1598]: time="2025-09-06T00:06:26.810321326Z" level=info msg="RemovePodSandbox \"6c109dee7865ca5e54f2e10ca8a21ca277362e878f1d13fe2edd923d174ea516\" returns successfully" Sep 6 00:06:27.106393 kubelet[2753]: E0906 00:06:27.106078 2753 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:27.711388 kubelet[2753]: E0906 00:06:27.711340 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="clean-cilium-state" Sep 6 00:06:27.711840 kubelet[2753]: E0906 00:06:27.711586 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="cilium-agent" Sep 6 00:06:27.711840 kubelet[2753]: E0906 00:06:27.711605 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="apply-sysctl-overwrites" Sep 6 00:06:27.711840 kubelet[2753]: E0906 00:06:27.711612 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="mount-bpf-fs" Sep 6 00:06:27.711840 kubelet[2753]: E0906 00:06:27.711617 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27d862b8-7252-4fff-913d-7ef8fa524d35" containerName="cilium-operator" Sep 6 00:06:27.711840 kubelet[2753]: E0906 00:06:27.711623 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="mount-cgroup" Sep 6 00:06:27.711840 kubelet[2753]: I0906 00:06:27.711662 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbd88c5-fb01-4a23-a044-b7fb468bbf9a" containerName="cilium-agent" Sep 6 00:06:27.711840 kubelet[2753]: I0906 00:06:27.711677 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d862b8-7252-4fff-913d-7ef8fa524d35" containerName="cilium-operator" Sep 6 00:06:27.856582 kubelet[2753]: I0906 00:06:27.855906 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-cni-path\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.856582 kubelet[2753]: I0906 00:06:27.855976 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-host-proc-sys-kernel\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.856582 kubelet[2753]: I0906 00:06:27.856013 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-hostproc\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.856582 kubelet[2753]: I0906 00:06:27.856052 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-etc-cni-netd\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.856582 
kubelet[2753]: I0906 00:06:27.856087 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-bpf-maps\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.856582 kubelet[2753]: I0906 00:06:27.856119 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57c27c92-ce4e-4081-ab17-430fb2927906-cilium-config-path\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856148 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-cilium-cgroup\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856182 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-lib-modules\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856212 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57c27c92-ce4e-4081-ab17-430fb2927906-cilium-ipsec-secrets\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856244 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57c27c92-ce4e-4081-ab17-430fb2927906-hubble-tls\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856276 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvbj\" (UniqueName: \"kubernetes.io/projected/57c27c92-ce4e-4081-ab17-430fb2927906-kube-api-access-5dvbj\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857014 kubelet[2753]: I0906 00:06:27.856309 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-cilium-run\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857288 kubelet[2753]: I0906 00:06:27.856341 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-xtables-lock\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857288 kubelet[2753]: I0906 00:06:27.856389 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/57c27c92-ce4e-4081-ab17-430fb2927906-clustermesh-secrets\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.857288 kubelet[2753]: I0906 00:06:27.856461 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57c27c92-ce4e-4081-ab17-430fb2927906-host-proc-sys-net\") pod \"cilium-s7pdq\" (UID: \"57c27c92-ce4e-4081-ab17-430fb2927906\") " pod="kube-system/cilium-s7pdq" Sep 6 00:06:27.902678 sshd[6057]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:27.912357 systemd[1]: sshd@128-91.99.216.181:22-139.178.68.195:52714.service: Deactivated successfully. Sep 6 00:06:27.915984 systemd[1]: session-121.scope: Deactivated successfully. Sep 6 00:06:27.918200 systemd-logind[1566]: Session 121 logged out. Waiting for processes to exit. Sep 6 00:06:27.919574 systemd-logind[1566]: Removed session 121. Sep 6 00:06:28.018718 containerd[1598]: time="2025-09-06T00:06:28.018567869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7pdq,Uid:57c27c92-ce4e-4081-ab17-430fb2927906,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:28.051232 containerd[1598]: time="2025-09-06T00:06:28.050355035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:28.051232 containerd[1598]: time="2025-09-06T00:06:28.050855416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:28.051232 containerd[1598]: time="2025-09-06T00:06:28.050880617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:28.051232 containerd[1598]: time="2025-09-06T00:06:28.051006702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:28.070803 systemd[1]: Started sshd@129-91.99.216.181:22-139.178.68.195:52718.service - OpenSSH per-connection server daemon (139.178.68.195:52718). 
Sep 6 00:06:28.100864 containerd[1598]: time="2025-09-06T00:06:28.100814437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7pdq,Uid:57c27c92-ce4e-4081-ab17-430fb2927906,Namespace:kube-system,Attempt:0,} returns sandbox id \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\""
Sep 6 00:06:28.105731 containerd[1598]: time="2025-09-06T00:06:28.105071249Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:06:28.116252 containerd[1598]: time="2025-09-06T00:06:28.116198139Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0acef6a8c809444772c0449c4f0b53dece505afe1a8033bd0e1eb18ddd37a2c3\""
Sep 6 00:06:28.117151 containerd[1598]: time="2025-09-06T00:06:28.116934049Z" level=info msg="StartContainer for \"0acef6a8c809444772c0449c4f0b53dece505afe1a8033bd0e1eb18ddd37a2c3\""
Sep 6 00:06:28.188812 containerd[1598]: time="2025-09-06T00:06:28.188767316Z" level=info msg="StartContainer for \"0acef6a8c809444772c0449c4f0b53dece505afe1a8033bd0e1eb18ddd37a2c3\" returns successfully"
Sep 6 00:06:28.235841 containerd[1598]: time="2025-09-06T00:06:28.235758177Z" level=info msg="shim disconnected" id=0acef6a8c809444772c0449c4f0b53dece505afe1a8033bd0e1eb18ddd37a2c3 namespace=k8s.io
Sep 6 00:06:28.235841 containerd[1598]: time="2025-09-06T00:06:28.235831660Z" level=warning msg="cleaning up after shim disconnected" id=0acef6a8c809444772c0449c4f0b53dece505afe1a8033bd0e1eb18ddd37a2c3 namespace=k8s.io
Sep 6 00:06:28.235841 containerd[1598]: time="2025-09-06T00:06:28.235844540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:28.285561 containerd[1598]: time="2025-09-06T00:06:28.285079532Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:06:28.298053 containerd[1598]: time="2025-09-06T00:06:28.297994695Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92a484705d1e9842b307794237ffa8ace33e2ff9787a2e50a3b34c574a837350\""
Sep 6 00:06:28.302454 containerd[1598]: time="2025-09-06T00:06:28.302004537Z" level=info msg="StartContainer for \"92a484705d1e9842b307794237ffa8ace33e2ff9787a2e50a3b34c574a837350\""
Sep 6 00:06:28.356252 containerd[1598]: time="2025-09-06T00:06:28.356104966Z" level=info msg="StartContainer for \"92a484705d1e9842b307794237ffa8ace33e2ff9787a2e50a3b34c574a837350\" returns successfully"
Sep 6 00:06:28.392141 containerd[1598]: time="2025-09-06T00:06:28.392079942Z" level=info msg="shim disconnected" id=92a484705d1e9842b307794237ffa8ace33e2ff9787a2e50a3b34c574a837350 namespace=k8s.io
Sep 6 00:06:28.392619 containerd[1598]: time="2025-09-06T00:06:28.392370593Z" level=warning msg="cleaning up after shim disconnected" id=92a484705d1e9842b307794237ffa8ace33e2ff9787a2e50a3b34c574a837350 namespace=k8s.io
Sep 6 00:06:28.392619 containerd[1598]: time="2025-09-06T00:06:28.392386914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:29.068779 sshd[6101]: Accepted publickey for core from 139.178.68.195 port 52718 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:06:29.070504 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:29.076743 systemd-logind[1566]: New session 122 of user core.
Sep 6 00:06:29.079802 systemd[1]: Started session-122.scope - Session 122 of User core.
Sep 6 00:06:29.291116 containerd[1598]: time="2025-09-06T00:06:29.290957314Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:06:29.312187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799648183.mount: Deactivated successfully.
Sep 6 00:06:29.313958 containerd[1598]: time="2025-09-06T00:06:29.313869962Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640\""
Sep 6 00:06:29.316514 containerd[1598]: time="2025-09-06T00:06:29.316106332Z" level=info msg="StartContainer for \"f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640\""
Sep 6 00:06:29.397288 containerd[1598]: time="2025-09-06T00:06:29.392421181Z" level=info msg="StartContainer for \"f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640\" returns successfully"
Sep 6 00:06:29.429856 containerd[1598]: time="2025-09-06T00:06:29.429663808Z" level=info msg="shim disconnected" id=f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640 namespace=k8s.io
Sep 6 00:06:29.430324 containerd[1598]: time="2025-09-06T00:06:29.430116907Z" level=warning msg="cleaning up after shim disconnected" id=f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640 namespace=k8s.io
Sep 6 00:06:29.430324 containerd[1598]: time="2025-09-06T00:06:29.430137428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:29.758979 sshd[6101]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:29.765303 systemd-logind[1566]: Session 122 logged out. Waiting for processes to exit.
Sep 6 00:06:29.769763 systemd[1]: sshd@129-91.99.216.181:22-139.178.68.195:52718.service: Deactivated successfully.
Sep 6 00:06:29.773210 systemd[1]: session-122.scope: Deactivated successfully.
Sep 6 00:06:29.779232 systemd-logind[1566]: Removed session 122.
Sep 6 00:06:29.926900 systemd[1]: Started sshd@130-91.99.216.181:22-139.178.68.195:52732.service - OpenSSH per-connection server daemon (139.178.68.195:52732).
Sep 6 00:06:29.976396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4b578427596c07f5502185ba8a8b68081b189126ee76cf1f268303aedaf9640-rootfs.mount: Deactivated successfully.
Sep 6 00:06:30.300974 containerd[1598]: time="2025-09-06T00:06:30.300845954Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:06:30.330943 containerd[1598]: time="2025-09-06T00:06:30.330891091Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0\""
Sep 6 00:06:30.332681 containerd[1598]: time="2025-09-06T00:06:30.332638682Z" level=info msg="StartContainer for \"6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0\""
Sep 6 00:06:30.394023 containerd[1598]: time="2025-09-06T00:06:30.393969845Z" level=info msg="StartContainer for \"6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0\" returns successfully"
Sep 6 00:06:30.429428 containerd[1598]: time="2025-09-06T00:06:30.429336717Z" level=info msg="shim disconnected" id=6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0 namespace=k8s.io
Sep 6 00:06:30.429428 containerd[1598]: time="2025-09-06T00:06:30.429424201Z" level=warning msg="cleaning up after shim disconnected" id=6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0 namespace=k8s.io
Sep 6 00:06:30.429428 containerd[1598]: time="2025-09-06T00:06:30.429436521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:30.441674 containerd[1598]: time="2025-09-06T00:06:30.441603094Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 6 00:06:30.929839 sshd[6306]: Accepted publickey for core from 139.178.68.195 port 52732 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 6 00:06:30.933419 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:30.943017 systemd-logind[1566]: New session 123 of user core.
Sep 6 00:06:30.955966 systemd[1]: Started session-123.scope - Session 123 of User core.
Sep 6 00:06:30.975006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6092f3e206fc732154f9d5120bc196872746ed57b6ba1e1e4aed20bbb0d522b0-rootfs.mount: Deactivated successfully.
Sep 6 00:06:31.301768 containerd[1598]: time="2025-09-06T00:06:31.301581960Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:06:31.331489 containerd[1598]: time="2025-09-06T00:06:31.330704779Z" level=info msg="CreateContainer within sandbox \"9551f2bb744544487b56ca6c25c30768401eda8e6c5ea450ed72d793201680a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9\""
Sep 6 00:06:31.332542 containerd[1598]: time="2025-09-06T00:06:31.332493812Z" level=info msg="StartContainer for \"b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9\""
Sep 6 00:06:31.405609 containerd[1598]: time="2025-09-06T00:06:31.405532930Z" level=info msg="StartContainer for \"b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9\" returns successfully"
Sep 6 00:06:31.740046 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 6 00:06:34.679672 systemd-networkd[1244]: lxc_health: Link UP
Sep 6 00:06:34.691006 systemd-networkd[1244]: lxc_health: Gained carrier
Sep 6 00:06:35.968603 kubelet[2753]: E0906 00:06:35.968318 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43336->127.0.0.1:39297: write tcp 127.0.0.1:43336->127.0.0.1:39297: write: broken pipe
Sep 6 00:06:36.045599 systemd-networkd[1244]: lxc_health: Gained IPv6LL
Sep 6 00:06:36.056436 kubelet[2753]: I0906 00:06:36.055165 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s7pdq" podStartSLOduration=9.055151184 podStartE2EDuration="9.055151184s" podCreationTimestamp="2025-09-06 00:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:32.327575844 +0000 UTC m=+905.726585756" watchObservedRunningTime="2025-09-06 00:06:36.055151184 +0000 UTC m=+909.454160976"
Sep 6 00:06:39.153957 systemd[1]: Started sshd@131-91.99.216.181:22-188.166.242.21:52886.service - OpenSSH per-connection server daemon (188.166.242.21:52886).
Sep 6 00:06:40.170286 sshd[6995]: Connection closed by authenticating user root 188.166.242.21 port 52886 [preauth]
Sep 6 00:06:40.176889 systemd[1]: sshd@131-91.99.216.181:22-188.166.242.21:52886.service: Deactivated successfully.
Sep 6 00:06:40.206374 systemd[1]: run-containerd-runc-k8s.io-b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9-runc.KJG5KS.mount: Deactivated successfully.
Sep 6 00:06:44.500094 kubelet[2753]: E0906 00:06:44.499976 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60472->127.0.0.1:39297: write tcp 127.0.0.1:60472->127.0.0.1:39297: write: broken pipe
Sep 6 00:06:48.770845 kubelet[2753]: E0906 00:06:48.770482 2753 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:60502->127.0.0.1:39297: read tcp 127.0.0.1:60502->127.0.0.1:39297: read: connection reset by peer
Sep 6 00:06:55.150175 kubelet[2753]: E0906 00:06:55.150130 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36904->127.0.0.1:39297: write tcp 127.0.0.1:36904->127.0.0.1:39297: write: broken pipe
Sep 6 00:06:57.224736 systemd[1]: run-containerd-runc-k8s.io-b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9-runc.sBVFXE.mount: Deactivated successfully.
Sep 6 00:06:57.290094 kubelet[2753]: E0906 00:06:57.290004 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36906->127.0.0.1:39297: write tcp 127.0.0.1:36906->127.0.0.1:39297: write: broken pipe
Sep 6 00:07:12.174327 kubelet[2753]: E0906 00:07:12.174209 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57402->127.0.0.1:39297: write tcp 127.0.0.1:57402->127.0.0.1:39297: write: broken pipe
Sep 6 00:07:16.404131 systemd[1]: run-containerd-runc-k8s.io-b32d254723f393b4abb51f16d95f8a21d579f8775f4190a4b456427410833ce9-runc.w4oFHY.mount: Deactivated successfully.
Sep 6 00:07:18.588369 kubelet[2753]: E0906 00:07:18.588187 2753 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57434->127.0.0.1:39297: write tcp 127.0.0.1:57434->127.0.0.1:39297: write: broken pipe
Sep 6 00:07:33.579796 sshd[6306]: pam_unix(sshd:session): session closed for user core
Sep 6 00:07:33.585151 systemd[1]: sshd@130-91.99.216.181:22-139.178.68.195:52732.service: Deactivated successfully.
Sep 6 00:07:33.589495 systemd[1]: session-123.scope: Deactivated successfully.
Sep 6 00:07:33.590590 systemd-logind[1566]: Session 123 logged out. Waiting for processes to exit.
Sep 6 00:07:33.591598 systemd-logind[1566]: Removed session 123.
Sep 6 00:07:48.200888 kubelet[2753]: E0906 00:07:48.198238 2753 controller.go:195] "Failed to update lease" err="Put \"https://91.99.216.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-f09ad01745?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:07:48.629485 kubelet[2753]: E0906 00:07:48.629242 2753 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52244->10.0.0.2:2379: read: connection timed out"
Sep 6 00:07:49.602474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc-rootfs.mount: Deactivated successfully.
Sep 6 00:07:49.608057 containerd[1598]: time="2025-09-06T00:07:49.607962445Z" level=info msg="shim disconnected" id=919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc namespace=k8s.io
Sep 6 00:07:49.608057 containerd[1598]: time="2025-09-06T00:07:49.608044524Z" level=warning msg="cleaning up after shim disconnected" id=919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc namespace=k8s.io
Sep 6 00:07:49.608057 containerd[1598]: time="2025-09-06T00:07:49.608054564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:07:50.506567 kubelet[2753]: I0906 00:07:50.506242 2753 scope.go:117] "RemoveContainer" containerID="919ba07ff34d3839f7feb5d923ef6ab33b44c74e999917e99eb332fa5ccee3cc"
Sep 6 00:07:50.508852 containerd[1598]: time="2025-09-06T00:07:50.508787396Z" level=info msg="CreateContainer within sandbox \"52f0e536081b7d9897526e86b94be776484a49663e285ed54cd44fce57e0e76c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 6 00:07:50.526647 containerd[1598]: time="2025-09-06T00:07:50.526509940Z" level=info msg="CreateContainer within sandbox \"52f0e536081b7d9897526e86b94be776484a49663e285ed54cd44fce57e0e76c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"36b6a64391f6caf297dd8036cba7eecff8f02cda94898acfff260df8c5bff7c8\""
Sep 6 00:07:50.527157 containerd[1598]: time="2025-09-06T00:07:50.527120054Z" level=info msg="StartContainer for \"36b6a64391f6caf297dd8036cba7eecff8f02cda94898acfff260df8c5bff7c8\""
Sep 6 00:07:50.594454 containerd[1598]: time="2025-09-06T00:07:50.594326708Z" level=info msg="StartContainer for \"36b6a64391f6caf297dd8036cba7eecff8f02cda94898acfff260df8c5bff7c8\" returns successfully"
Sep 6 00:07:53.361052 kubelet[2753]: E0906 00:07:53.360861 2753 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52038->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-f09ad01745.186288d74c4f07fd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-f09ad01745,UID:608d604ce0cd0c5faf228ee7cd584fb8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-f09ad01745,},FirstTimestamp:2025-09-06 00:07:42.907508733 +0000 UTC m=+976.306518525,LastTimestamp:2025-09-06 00:07:42.907508733 +0000 UTC m=+976.306518525,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-f09ad01745,}"
Sep 6 00:07:54.965032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3-rootfs.mount: Deactivated successfully.
Sep 6 00:07:54.973987 containerd[1598]: time="2025-09-06T00:07:54.973747555Z" level=info msg="shim disconnected" id=153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3 namespace=k8s.io
Sep 6 00:07:54.973987 containerd[1598]: time="2025-09-06T00:07:54.973821914Z" level=warning msg="cleaning up after shim disconnected" id=153cb9f58e28f333c2cdd0023fb15f9c45f54ba687ee76c467a16c3fbd0e65d3 namespace=k8s.io
Sep 6 00:07:54.973987 containerd[1598]: time="2025-09-06T00:07:54.973833674Z" level=info msg="cleaning up dead shim" namespace=k8s.io