Jan 30 14:21:06.900112 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 14:21:06.900178 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 14:21:06.900190 kernel: KASLR enabled Jan 30 14:21:06.900196 kernel: efi: EFI v2.7 by EDK II Jan 30 14:21:06.900202 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18 Jan 30 14:21:06.900218 kernel: random: crng init done Jan 30 14:21:06.900227 kernel: ACPI: Early table checksum verification disabled Jan 30 14:21:06.900234 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Jan 30 14:21:06.900240 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 30 14:21:06.900246 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900255 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900261 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900267 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900273 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900281 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900290 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900297 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900303 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 14:21:06.900309 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 30 14:21:06.900316 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 30 14:21:06.900322 kernel: NUMA: Failed to initialise from firmware Jan 30 14:21:06.900329 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 30 14:21:06.900335 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Jan 30 14:21:06.900341 kernel: Zone ranges: Jan 30 14:21:06.900348 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 30 14:21:06.900354 kernel: DMA32 empty Jan 30 14:21:06.900362 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 30 14:21:06.900369 kernel: Movable zone start for each node Jan 30 14:21:06.900375 kernel: Early memory node ranges Jan 30 14:21:06.900382 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Jan 30 14:21:06.900388 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Jan 30 14:21:06.900395 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Jan 30 14:21:06.900401 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Jan 30 14:21:06.900408 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Jan 30 14:21:06.900414 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 30 14:21:06.900421 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 30 14:21:06.900427 kernel: psci: probing for conduit method from ACPI. Jan 30 14:21:06.900435 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 14:21:06.900442 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 14:21:06.900448 kernel: psci: Trusted OS migration not required Jan 30 14:21:06.900457 kernel: psci: SMC Calling Convention v1.1 Jan 30 14:21:06.900465 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 14:21:06.900472 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 14:21:06.900480 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 14:21:06.900487 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 30 14:21:06.900494 kernel: Detected PIPT I-cache on CPU0 Jan 30 14:21:06.900501 kernel: CPU features: detected: GIC system register CPU interface Jan 30 14:21:06.900512 kernel: CPU features: detected: Hardware dirty bit management Jan 30 14:21:06.900520 kernel: CPU features: detected: Spectre-v4 Jan 30 14:21:06.900526 kernel: CPU features: detected: Spectre-BHB Jan 30 14:21:06.900533 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 14:21:06.900540 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 14:21:06.900547 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 14:21:06.900553 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 14:21:06.900562 kernel: alternatives: applying boot alternatives Jan 30 14:21:06.900571 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:21:06.900578 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 14:21:06.900585 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 14:21:06.900592 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:21:06.900598 kernel: Fallback order for Node 0: 0 Jan 30 14:21:06.900605 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 30 14:21:06.900612 kernel: Policy zone: Normal Jan 30 14:21:06.900620 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:21:06.900627 kernel: software IO TLB: area num 2. Jan 30 14:21:06.900634 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 30 14:21:06.900643 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved) Jan 30 14:21:06.900650 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 14:21:06.900657 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:21:06.900664 kernel: rcu: RCU event tracing is enabled. Jan 30 14:21:06.900671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 14:21:06.900678 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:21:06.900685 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:21:06.900692 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 14:21:06.900699 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 14:21:06.900705 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 14:21:06.900712 kernel: GICv3: 256 SPIs implemented Jan 30 14:21:06.900723 kernel: GICv3: 0 Extended SPIs implemented Jan 30 14:21:06.900730 kernel: Root IRQ handler: gic_handle_irq Jan 30 14:21:06.900737 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 14:21:06.900744 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 14:21:06.900750 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 14:21:06.900758 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 14:21:06.900767 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 14:21:06.900774 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 30 14:21:06.900781 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 30 14:21:06.900788 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:21:06.900795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:21:06.900803 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 14:21:06.900810 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 14:21:06.900818 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 14:21:06.900824 kernel: Console: colour dummy device 80x25 Jan 30 14:21:06.900832 kernel: ACPI: Core revision 20230628 Jan 30 14:21:06.900839 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 14:21:06.900850 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:21:06.900858 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:21:06.900866 kernel: landlock: Up and running. Jan 30 14:21:06.900874 kernel: SELinux: Initializing. Jan 30 14:21:06.900882 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:21:06.900889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:21:06.900897 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:21:06.900904 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:21:06.900911 kernel: rcu: Hierarchical SRCU implementation. Jan 30 14:21:06.900918 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:21:06.900926 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 14:21:06.900933 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 14:21:06.900940 kernel: Remapping and enabling EFI services. Jan 30 14:21:06.900949 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:21:06.900956 kernel: Detected PIPT I-cache on CPU1 Jan 30 14:21:06.900963 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 14:21:06.900970 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 30 14:21:06.900977 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:21:06.900984 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 14:21:06.900991 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 14:21:06.900999 kernel: SMP: Total of 2 processors activated. 
Jan 30 14:21:06.901006 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 14:21:06.901015 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 14:21:06.901022 kernel: CPU features: detected: Common not Private translations Jan 30 14:21:06.901029 kernel: CPU features: detected: CRC32 instructions Jan 30 14:21:06.901041 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 14:21:06.901051 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 14:21:06.901059 kernel: CPU features: detected: LSE atomic instructions Jan 30 14:21:06.901066 kernel: CPU features: detected: Privileged Access Never Jan 30 14:21:06.901074 kernel: CPU features: detected: RAS Extension Support Jan 30 14:21:06.901191 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 14:21:06.901203 kernel: CPU: All CPU(s) started at EL1 Jan 30 14:21:06.901222 kernel: alternatives: applying system-wide alternatives Jan 30 14:21:06.901230 kernel: devtmpfs: initialized Jan 30 14:21:06.901237 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:21:06.901245 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 14:21:06.901252 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:21:06.901260 kernel: SMBIOS 3.0.0 present. Jan 30 14:21:06.901267 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 30 14:21:06.901277 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:21:06.901285 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 14:21:06.901292 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 14:21:06.901300 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 14:21:06.901307 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:21:06.901315 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Jan 30 14:21:06.901322 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:21:06.901330 kernel: cpuidle: using governor menu Jan 30 14:21:06.901337 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 30 14:21:06.901346 kernel: ASID allocator initialised with 32768 entries Jan 30 14:21:06.901354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:21:06.901361 kernel: Serial: AMBA PL011 UART driver Jan 30 14:21:06.901369 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 14:21:06.901376 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 14:21:06.901384 kernel: Modules: 509040 pages in range for PLT usage Jan 30 14:21:06.901391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:21:06.901399 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:21:06.901406 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 14:21:06.901415 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 14:21:06.901422 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:21:06.901430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:21:06.901437 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 14:21:06.901445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 14:21:06.901452 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:21:06.901459 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:21:06.901466 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:21:06.901473 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:21:06.901482 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 14:21:06.901490 kernel: ACPI: Interpreter enabled Jan 30 14:21:06.901497 kernel: ACPI: Using GIC for interrupt routing Jan 30 14:21:06.901504 kernel: ACPI: MCFG table detected, 1 entries Jan 30 14:21:06.901512 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 14:21:06.901519 kernel: printk: console [ttyAMA0] enabled Jan 30 14:21:06.901527 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 14:21:06.901696 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 14:21:06.901776 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 14:21:06.901842 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 14:21:06.901906 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 14:21:06.901972 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 14:21:06.901982 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 14:21:06.901990 kernel: PCI host bridge to bus 0000:00 Jan 30 14:21:06.902066 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 14:21:06.904279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 14:21:06.904363 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 14:21:06.904424 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 14:21:06.904512 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 14:21:06.904590 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 30 14:21:06.904659 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 30 14:21:06.904730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 30 14:21:06.904835 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.904917 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 30 14:21:06.904997 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905131 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 30 14:21:06.905264 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905344 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 30 14:21:06.905439 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905507 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 30 14:21:06.905583 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905649 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 30 14:21:06.905723 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905788 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 30 14:21:06.905865 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.905932 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 30 14:21:06.906005 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.906072 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 30 14:21:06.906168 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 30 14:21:06.906253 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 30 14:21:06.906335 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 30 14:21:06.906404 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Jan 30 14:21:06.906482 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 14:21:06.906560 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 30 14:21:06.906631 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 14:21:06.906698 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 30 14:21:06.906780 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 30 14:21:06.906855 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 30 14:21:06.906932 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 30 14:21:06.907003 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 30 14:21:06.907071 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 30 14:21:06.909366 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 30 14:21:06.909450 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 30 14:21:06.909535 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 30 14:21:06.909603 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jan 30 14:21:06.909683 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 30 14:21:06.909766 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 30 14:21:06.909836 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 30 14:21:06.909904 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 30 14:21:06.909992 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 14:21:06.910062 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 30 14:21:06.910153 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 30 14:21:06.910270 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 30 14:21:06.910349 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 30 14:21:06.910416 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 30 14:21:06.910481 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 30 14:21:06.910558 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 30 14:21:06.910624 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 30 14:21:06.910689 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 30 14:21:06.910758 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 30 14:21:06.910824 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 30 14:21:06.910889 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 30 14:21:06.910958 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 30 14:21:06.911025 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 30 14:21:06.913831 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 30 14:21:06.913950 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 30 14:21:06.914018 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 30 14:21:06.914132 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 30 14:21:06.914226 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 30 14:21:06.914300 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 30 14:21:06.914368 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 30 14:21:06.914448 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 14:21:06.914524 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 30 14:21:06.914604 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 30 14:21:06.914674 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 14:21:06.914741 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 30 14:21:06.914807 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 30 14:21:06.914877 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 14:21:06.914944 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 30 14:21:06.915013 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 30 14:21:06.915094 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Jan 30 14:21:06.915165 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jan 30 14:21:06.915251 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 30 14:21:06.915321 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 30 14:21:06.915391 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 30 14:21:06.915463 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 30 14:21:06.915533 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 30 14:21:06.915598 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 30 14:21:06.915665 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 30 14:21:06.915730 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 30 14:21:06.915796 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 30 14:21:06.915863 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 30 14:21:06.915934 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 30 14:21:06.916002 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 30 14:21:06.917720 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 30 14:21:06.917851 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 30 14:21:06.917925 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 30 14:21:06.917991 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 30 14:21:06.918063 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 30 14:21:06.918523 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 30 14:21:06.918605 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 30 14:21:06.918671 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 30 14:21:06.918741 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 30 14:21:06.918808 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 30 14:21:06.918880 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 30 14:21:06.918947 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 30 14:21:06.919015 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 30 14:21:06.919106 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 30 14:21:06.919180 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 30 14:21:06.919266 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 30 14:21:06.919339 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 30 14:21:06.919406 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 30 14:21:06.919474 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 30 14:21:06.919542 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 30 14:21:06.919629 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 30 14:21:06.919762 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 30 14:21:06.919842 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 30 14:21:06.919912 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Jan 30 14:21:06.919987 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 30 14:21:06.920066 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 30 14:21:06.920318 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 14:21:06.920399 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 30 14:21:06.920468 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 14:21:06.920543 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 30 14:21:06.920612 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 30 14:21:06.920679 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 30 14:21:06.920752 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 30 14:21:06.920825 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 14:21:06.920891 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 30 14:21:06.920956 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 30 14:21:06.921021 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 30 14:21:06.921109 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 30 14:21:06.921183 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 30 14:21:06.921301 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 14:21:06.921374 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 30 14:21:06.921445 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 30 14:21:06.921512 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 30 14:21:06.921589 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 30 14:21:06.921668 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 14:21:06.921738 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 30 14:21:06.921808 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 30 14:21:06.921875 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 30 14:21:06.921954 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 30 14:21:06.922028 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jan 30 14:21:06.924185 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 14:21:06.924332 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 30 14:21:06.924401 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 30 14:21:06.924466 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 30 14:21:06.924542 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 30 14:21:06.924611 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 30 14:21:06.924681 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 14:21:06.924753 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 30 14:21:06.924819 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 30 14:21:06.924883 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 30 14:21:06.924956 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 30 14:21:06.925026 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 30 14:21:06.925156 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 30 14:21:06.925249 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 14:21:06.925318 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 30 14:21:06.925389 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 30 14:21:06.925453 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 30 14:21:06.925522 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 14:21:06.925587 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 30 14:21:06.925655 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 30 14:21:06.925722 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 30 14:21:06.925791 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 14:21:06.925856 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 30 14:21:06.925924 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 30 14:21:06.925989 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 30 14:21:06.926056 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 14:21:06.927900 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 14:21:06.927986 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 14:21:06.928063 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 30 14:21:06.928196 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 30 14:21:06.928291 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 30 14:21:06.928371 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 30 14:21:06.928433 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 30 14:21:06.928492 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 30 14:21:06.928561 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 30 14:21:06.928622 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 30 14:21:06.928686 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 30 14:21:06.928756 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 14:21:06.928817 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 30 14:21:06.928891 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 30 14:21:06.928959 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 30 14:21:06.929020 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 30 14:21:06.929170 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 30 14:21:06.929278 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 30 14:21:06.929343 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 30 14:21:06.929404 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 30 14:21:06.929476 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 30 14:21:06.929542 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 30 14:21:06.929605 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 30 14:21:06.929674 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 30 14:21:06.929734 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 30 14:21:06.929796 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 30 14:21:06.929863 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 30 14:21:06.929925 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 30 14:21:06.929990 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 30 14:21:06.930001 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 14:21:06.930009 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 14:21:06.930017 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 14:21:06.930025 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 14:21:06.930033 kernel: iommu: Default domain type: Translated Jan 30 14:21:06.930041 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 14:21:06.930049 kernel: efivars: Registered efivars operations Jan 30 14:21:06.930057 kernel: vgaarb: loaded Jan 30 14:21:06.930067 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 14:21:06.930074 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:21:06.930094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:21:06.930103 kernel: pnp: PnP ACPI init Jan 30 14:21:06.930185 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 14:21:06.930197 kernel: pnp: PnP ACPI: found 1 devices Jan 30 14:21:06.930205 kernel: NET: Registered PF_INET protocol family Jan 30 14:21:06.930250 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:21:06.930264 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 14:21:06.930272 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 14:21:06.930280 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 14:21:06.930288 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 14:21:06.930296 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 14:21:06.930304 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:21:06.930311 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:21:06.930319 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:21:06.930407 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 30 14:21:06.930422 kernel: PCI: CLS 0 bytes, default 64 Jan 30 14:21:06.930430 kernel: kvm [1]: HYP mode not available Jan 30 14:21:06.930438 kernel: Initialise system trusted keyrings Jan 30 14:21:06.930446 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 14:21:06.930454 kernel: Key type asymmetric registered Jan 30 14:21:06.930461 kernel: Asymmetric key parser 'x509' registered Jan 30 14:21:06.930469 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 14:21:06.930477 kernel: io scheduler mq-deadline registered Jan 30 14:21:06.930485 kernel: io scheduler kyber registered Jan 30 14:21:06.930495 kernel: io scheduler bfq registered Jan 30 14:21:06.930504 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 30 14:21:06.930576 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 30 14:21:06.930645 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 30 14:21:06.930712 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.930781 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 30 14:21:06.930849 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Jan 30 14:21:06.930919 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.930988 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 30 14:21:06.931054 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 30 14:21:06.933237 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.933337 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 30 14:21:06.933407 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 30 14:21:06.933482 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.933554 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 30 14:21:06.933621 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 30 14:21:06.933688 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.933758 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 30 14:21:06.933824 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 30 14:21:06.933894 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.933966 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 30 14:21:06.934034 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 30 14:21:06.934116 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.934188 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 30 14:21:06.934275 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 30 14:21:06.934351 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.934363 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 30 14:21:06.934431 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 30 14:21:06.934498 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 30 14:21:06.934564 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:21:06.934575 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 14:21:06.934585 kernel: ACPI: button: Power Button [PWRB] Jan 30 14:21:06.934596 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 14:21:06.934669 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 30 14:21:06.934745 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 30 14:21:06.934823 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 30 14:21:06.934835 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:21:06.934843 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 30 14:21:06.934914 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 30 14:21:06.934929 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 30 14:21:06.934937 kernel: thunder_xcv, ver 1.0 Jan 30 14:21:06.934945 kernel: thunder_bgx, ver 1.0 
Jan 30 14:21:06.934953 kernel: nicpf, ver 1.0 Jan 30 14:21:06.934960 kernel: nicvf, ver 1.0 Jan 30 14:21:06.935043 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 14:21:06.937266 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:21:06 UTC (1738246866) Jan 30 14:21:06.937294 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:21:06.937311 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 14:21:06.937320 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 14:21:06.937328 kernel: watchdog: Hard watchdog permanently disabled Jan 30 14:21:06.937337 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:21:06.937345 kernel: Segment Routing with IPv6 Jan 30 14:21:06.937354 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:21:06.937362 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:21:06.937370 kernel: Key type dns_resolver registered Jan 30 14:21:06.937378 kernel: registered taskstats version 1 Jan 30 14:21:06.937386 kernel: Loading compiled-in X.509 certificates Jan 30 14:21:06.937396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 14:21:06.937404 kernel: Key type .fscrypt registered Jan 30 14:21:06.937412 kernel: Key type fscrypt-provisioning registered Jan 30 14:21:06.937421 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 14:21:06.937429 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:21:06.937437 kernel: ima: No architecture policies found Jan 30 14:21:06.937445 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 14:21:06.937453 kernel: clk: Disabling unused clocks Jan 30 14:21:06.937462 kernel: Freeing unused kernel memory: 39360K Jan 30 14:21:06.937470 kernel: Run /init as init process Jan 30 14:21:06.937477 kernel: with arguments: Jan 30 14:21:06.937486 kernel: /init Jan 30 14:21:06.937493 kernel: with environment: Jan 30 14:21:06.937501 kernel: HOME=/ Jan 30 14:21:06.937509 kernel: TERM=linux Jan 30 14:21:06.937516 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:21:06.937526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:21:06.937539 systemd[1]: Detected virtualization kvm. Jan 30 14:21:06.937547 systemd[1]: Detected architecture arm64. Jan 30 14:21:06.937555 systemd[1]: Running in initrd. Jan 30 14:21:06.937563 systemd[1]: No hostname configured, using default hostname. Jan 30 14:21:06.937571 systemd[1]: Hostname set to . Jan 30 14:21:06.937580 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:21:06.937588 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:21:06.937598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:21:06.937606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:21:06.937616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:21:06.937624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 30 14:21:06.937633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:21:06.937643 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:21:06.937655 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:21:06.937666 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:21:06.937674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:21:06.937683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:21:06.937693 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:21:06.937702 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:21:06.937710 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:21:06.937718 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:21:06.937727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:21:06.937735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:21:06.937746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:21:06.937754 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:21:06.937762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:21:06.937771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:21:06.937781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:21:06.937790 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:21:06.937798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:21:06.937807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:21:06.937817 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:21:06.937826 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:21:06.937834 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:21:06.937843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:21:06.937851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:06.937859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:21:06.937868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:21:06.937902 systemd-journald[236]: Collecting audit messages is disabled. Jan 30 14:21:06.937925 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:21:06.937935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:21:06.937945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:06.937953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:21:06.937961 kernel: Bridge firewalling registered Jan 30 14:21:06.937970 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:21:06.937978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 14:21:06.937988 systemd-journald[236]: Journal started Jan 30 14:21:06.938010 systemd-journald[236]: Runtime Journal (/run/log/journal/1c4cda2a8ab242f0bb6ed734e9e21500) is 8.0M, max 76.5M, 68.5M free. Jan 30 14:21:06.905387 systemd-modules-load[237]: Inserted module 'overlay' Jan 30 14:21:06.920566 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 30 14:21:06.943418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:21:06.945193 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:21:06.945865 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:21:06.949116 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:21:06.951754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:21:06.959316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:21:06.964347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:21:06.967229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:21:06.976618 dracut-cmdline[265]: dracut-dracut-053 Jan 30 14:21:06.980394 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:21:06.986003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:21:06.989860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:21:06.996343 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:21:07.026847 systemd-resolved[289]: Positive Trust Anchors: Jan 30 14:21:07.027560 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:21:07.027595 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:21:07.037293 systemd-resolved[289]: Defaulting to hostname 'linux'. Jan 30 14:21:07.039073 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:21:07.040450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:21:07.063170 kernel: SCSI subsystem initialized Jan 30 14:21:07.068120 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:21:07.076178 kernel: iscsi: registered transport (tcp) Jan 30 14:21:07.090169 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:21:07.090244 kernel: QLogic iSCSI HBA Driver Jan 30 14:21:07.140563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 14:21:07.147355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:21:07.167128 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:21:07.167200 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:21:07.168137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:21:07.226167 kernel: raid6: neonx8 gen() 15659 MB/s Jan 30 14:21:07.241149 kernel: raid6: neonx4 gen() 15556 MB/s Jan 30 14:21:07.258134 kernel: raid6: neonx2 gen() 13171 MB/s Jan 30 14:21:07.275137 kernel: raid6: neonx1 gen() 10423 MB/s Jan 30 14:21:07.292138 kernel: raid6: int64x8 gen() 6915 MB/s Jan 30 14:21:07.309153 kernel: raid6: int64x4 gen() 7300 MB/s Jan 30 14:21:07.326145 kernel: raid6: int64x2 gen() 6086 MB/s Jan 30 14:21:07.343174 kernel: raid6: int64x1 gen() 5028 MB/s Jan 30 14:21:07.343287 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s Jan 30 14:21:07.360143 kernel: raid6: .... xor() 11849 MB/s, rmw enabled Jan 30 14:21:07.360252 kernel: raid6: using neon recovery algorithm Jan 30 14:21:07.365320 kernel: xor: measuring software checksum speed Jan 30 14:21:07.365395 kernel: 8regs : 19745 MB/sec Jan 30 14:21:07.366462 kernel: 32regs : 19641 MB/sec Jan 30 14:21:07.366527 kernel: arm64_neon : 26874 MB/sec Jan 30 14:21:07.366544 kernel: xor: using function: arm64_neon (26874 MB/sec) Jan 30 14:21:07.417217 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:21:07.431991 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:21:07.438483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:21:07.453792 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 30 14:21:07.457484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:21:07.468295 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:21:07.485950 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jan 30 14:21:07.521877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:21:07.529297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:21:07.576399 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:21:07.590983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:21:07.609513 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:21:07.612875 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:21:07.615940 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:21:07.617650 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:21:07.628367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:21:07.653114 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 14:21:07.704265 kernel: scsi host0: Virtio SCSI HBA Jan 30 14:21:07.710451 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 14:21:07.710549 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 14:21:07.717501 kernel: ACPI: bus type USB registered Jan 30 14:21:07.717631 kernel: usbcore: registered new interface driver usbfs Jan 30 14:21:07.717670 kernel: usbcore: registered new interface driver hub Jan 30 14:21:07.718387 kernel: usbcore: registered new device driver usb Jan 30 14:21:07.739975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:21:07.741264 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:21:07.742857 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:21:07.744791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:21:07.744976 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:07.745675 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:07.754596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:07.758889 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 30 14:21:07.766286 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 30 14:21:07.766433 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:21:07.766444 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 30 14:21:07.767736 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 14:21:07.779271 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 14:21:07.779405 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 14:21:07.779491 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 14:21:07.779578 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 14:21:07.779660 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 14:21:07.779740 kernel: hub 1-0:1.0: USB hub found Jan 30 14:21:07.779849 kernel: hub 1-0:1.0: 4 ports detected Jan 30 14:21:07.779928 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 14:21:07.780025 kernel: hub 2-0:1.0: USB hub found Jan 30 14:21:07.780200 kernel: hub 2-0:1.0: 4 ports detected Jan 30 14:21:07.781362 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 30 14:21:07.790917 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 14:21:07.791055 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 30 14:21:07.791176 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 30 14:21:07.791328 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:21:07.791419 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 14:21:07.791430 kernel: GPT:17805311 != 80003071 Jan 30 14:21:07.791448 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:21:07.791458 kernel: GPT:17805311 != 80003071 Jan 30 14:21:07.791468 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:21:07.791503 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:21:07.791517 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 30 14:21:07.794485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:21:07.803341 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:21:07.840145 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (527) Jan 30 14:21:07.840215 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (521) Jan 30 14:21:07.850489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:21:07.865912 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 14:21:07.873331 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 14:21:07.877536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 14:21:07.879813 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 14:21:07.884858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:21:07.901518 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:21:07.911108 disk-uuid[577]: Primary Header is updated. Jan 30 14:21:07.911108 disk-uuid[577]: Secondary Entries is updated. Jan 30 14:21:07.911108 disk-uuid[577]: Secondary Header is updated. Jan 30 14:21:07.921108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:21:07.930137 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:21:07.936141 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:21:08.017227 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 14:21:08.261148 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 30 14:21:08.396953 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 30 14:21:08.397023 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 14:21:08.399141 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 30 14:21:08.453525 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 30 14:21:08.454131 kernel: usbcore: registered new interface driver usbhid Jan 30 14:21:08.454153 kernel: usbhid: USB HID core driver Jan 30 14:21:08.938150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:21:08.939129 disk-uuid[578]: The operation has completed successfully. Jan 30 14:21:08.989850 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:21:08.990764 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:21:09.010544 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:21:09.028531 sh[595]: Success Jan 30 14:21:09.043131 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:21:09.089736 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:21:09.105352 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:21:09.109190 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 14:21:09.140350 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:21:09.140411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:21:09.140424 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:21:09.141308 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:21:09.141350 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:21:09.150122 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:21:09.152126 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:21:09.153371 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:21:09.160347 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:21:09.163627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:21:09.179746 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:21:09.179818 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:21:09.179830 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:21:09.183105 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:21:09.183258 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:21:09.195110 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:21:09.195307 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:21:09.202332 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:21:09.210315 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:21:09.305285 ignition[685]: Ignition 2.19.0 Jan 30 14:21:09.305863 ignition[685]: Stage: fetch-offline Jan 30 14:21:09.305911 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:09.305919 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:09.306565 ignition[685]: parsed url from cmdline: "" Jan 30 14:21:09.306579 ignition[685]: no config URL provided Jan 30 14:21:09.306586 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:21:09.306599 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:21:09.309824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:21:09.306605 ignition[685]: failed to fetch config: resource requires networking Jan 30 14:21:09.307452 ignition[685]: Ignition finished successfully Jan 30 14:21:09.312531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:21:09.321378 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:21:09.341596 systemd-networkd[786]: lo: Link UP Jan 30 14:21:09.341612 systemd-networkd[786]: lo: Gained carrier Jan 30 14:21:09.343334 systemd-networkd[786]: Enumeration completed Jan 30 14:21:09.344235 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:21:09.344435 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 14:21:09.344438 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:21:09.346595 systemd[1]: Reached target network.target - Network. Jan 30 14:21:09.347425 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:09.347428 systemd-networkd[786]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:21:09.348063 systemd-networkd[786]: eth0: Link UP Jan 30 14:21:09.348067 systemd-networkd[786]: eth0: Gained carrier Jan 30 14:21:09.348076 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:09.353756 systemd-networkd[786]: eth1: Link UP Jan 30 14:21:09.353759 systemd-networkd[786]: eth1: Gained carrier Jan 30 14:21:09.353770 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:09.355378 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:21:09.368593 ignition[788]: Ignition 2.19.0 Jan 30 14:21:09.368604 ignition[788]: Stage: fetch Jan 30 14:21:09.368788 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:09.368798 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:09.368884 ignition[788]: parsed url from cmdline: "" Jan 30 14:21:09.368887 ignition[788]: no config URL provided Jan 30 14:21:09.368891 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:21:09.368898 ignition[788]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:21:09.368915 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 14:21:09.369559 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 14:21:09.392191 systemd-networkd[786]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:21:09.409223 systemd-networkd[786]: eth0: DHCPv4 address 49.13.124.2/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:21:09.570332 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 14:21:09.576600 ignition[788]: GET result: OK Jan 30 14:21:09.576712 ignition[788]: parsing config with SHA512: 4e01d0fe62288b2ce5a766b4355de219b60de433321a8b48e316858482abab3ee61db8f0bcd37a4b0cc83ed1ab29aaa8045cae51fd9e1d1d80425fb846e8686c Jan 30 14:21:09.583068 unknown[788]: fetched base config from "system" Jan 30 14:21:09.583113 unknown[788]: fetched base config from "system" Jan 30 14:21:09.583777 ignition[788]: fetch: fetch complete Jan 30 14:21:09.583127 unknown[788]: fetched user config from "hetzner" Jan 30 14:21:09.583785 ignition[788]: fetch: fetch passed Jan 30 14:21:09.586054 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:21:09.583842 ignition[788]: Ignition finished successfully Jan 30 14:21:09.591550 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 14:21:09.610014 ignition[795]: Ignition 2.19.0 Jan 30 14:21:09.610025 ignition[795]: Stage: kargs Jan 30 14:21:09.610283 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:09.610295 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:09.611387 ignition[795]: kargs: kargs passed Jan 30 14:21:09.611443 ignition[795]: Ignition finished successfully Jan 30 14:21:09.612653 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:21:09.618309 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:21:09.632958 ignition[801]: Ignition 2.19.0 Jan 30 14:21:09.632978 ignition[801]: Stage: disks Jan 30 14:21:09.633820 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:09.636262 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:21:09.633842 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:09.634998 ignition[801]: disks: disks passed Jan 30 14:21:09.635058 ignition[801]: Ignition finished successfully Jan 30 14:21:09.639371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:21:09.640066 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:21:09.641502 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:21:09.642503 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:21:09.643438 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:21:09.651383 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:21:09.674220 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:21:09.678931 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:21:09.685368 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:21:09.739499 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:21:09.740256 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:21:09.741508 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:21:09.751512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:21:09.755639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:21:09.758298 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:21:09.761265 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:21:09.764270 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817) Jan 30 14:21:09.762439 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:21:09.767115 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:21:09.767165 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:21:09.767177 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:21:09.772047 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 14:21:09.776260 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:21:09.776324 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:21:09.779411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:21:09.785352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:21:09.820280 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:21:09.823316 coreos-metadata[819]: Jan 30 14:21:09.823 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 14:21:09.825953 coreos-metadata[819]: Jan 30 14:21:09.825 INFO Fetch successful Jan 30 14:21:09.825953 coreos-metadata[819]: Jan 30 14:21:09.825 INFO wrote hostname ci-4081-3-0-1-1410e96de7 to /sysroot/etc/hostname Jan 30 14:21:09.828908 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:21:09.830747 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:21:09.834513 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:21:09.840005 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:21:09.946636 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:21:09.951401 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:21:09.954280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:21:09.964119 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:21:09.980819 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:21:09.987502 ignition[934]: INFO : Ignition 2.19.0 Jan 30 14:21:09.987502 ignition[934]: INFO : Stage: mount Jan 30 14:21:09.988689 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:09.988689 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:09.990378 ignition[934]: INFO : mount: mount passed Jan 30 14:21:09.991247 ignition[934]: INFO : Ignition finished successfully Jan 30 14:21:09.993300 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:21:10.000319 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:21:10.139795 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:21:10.152490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:21:10.162129 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945) Jan 30 14:21:10.164110 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:21:10.164162 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:21:10.164245 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:21:10.167346 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:21:10.167394 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:21:10.170997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:21:10.204925 ignition[962]: INFO : Ignition 2.19.0 Jan 30 14:21:10.204925 ignition[962]: INFO : Stage: files Jan 30 14:21:10.206169 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:10.206169 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:10.206169 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:21:10.208521 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:21:10.208521 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:21:10.213432 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:21:10.214807 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:21:10.216741 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:21:10.215251 unknown[962]: wrote ssh authorized keys file for user: core Jan 30 14:21:10.219398 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:21:10.219398 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:21:10.292319 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:21:10.418582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:21:10.418582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:21:10.421140 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 14:21:10.452234 systemd-networkd[786]: eth1: Gained IPv6LL Jan 30 14:21:11.010227 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 14:21:11.128405 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:21:11.130593 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 14:21:11.220476 systemd-networkd[786]: eth0: Gained IPv6LL Jan 30 14:21:11.630226 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 14:21:12.119882 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:21:12.119882 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 14:21:12.123052 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:21:12.124040 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:21:12.124040 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:21:12.124040 ignition[962]: INFO : files: files passed Jan 30 14:21:12.124040 ignition[962]: INFO : Ignition finished successfully Jan 30 14:21:12.125702 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 30 14:21:12.131378 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:21:12.136333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:21:12.140771 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:21:12.140885 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:21:12.152002 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:21:12.152002 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:21:12.154624 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:21:12.156576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:21:12.157506 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:21:12.164272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:21:12.191003 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:21:12.191147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:21:12.192959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:21:12.193914 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:21:12.195008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:21:12.200364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:21:12.214422 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:21:12.227403 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:21:12.239875 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:21:12.240669 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:21:12.242155 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:21:12.243412 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:21:12.243542 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:21:12.244906 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:21:12.245579 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:21:12.246631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:21:12.247715 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:21:12.248768 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:21:12.249902 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:21:12.250956 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:21:12.252100 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:21:12.253049 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:21:12.254122 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:21:12.255013 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:21:12.255162 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 14:21:12.256462 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:21:12.257557 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:21:12.258665 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:21:12.259154 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:21:12.259858 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:21:12.259981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:21:12.261556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:21:12.261697 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:21:12.264383 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:21:12.264628 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:21:12.266000 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:21:12.266144 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:21:12.278672 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:21:12.280063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:21:12.280387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:21:12.287511 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:21:12.290357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:21:12.290677 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:21:12.298507 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:21:12.299529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:21:12.312746 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:21:12.313459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:21:12.322564 ignition[1015]: INFO : Ignition 2.19.0 Jan 30 14:21:12.322564 ignition[1015]: INFO : Stage: umount Jan 30 14:21:12.322564 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:12.322564 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:21:12.326940 ignition[1015]: INFO : umount: umount passed Jan 30 14:21:12.326940 ignition[1015]: INFO : Ignition finished successfully Jan 30 14:21:12.326241 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:21:12.326350 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:21:12.328008 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:21:12.328133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:21:12.330921 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:21:12.330986 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:21:12.332449 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:21:12.332498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:21:12.333475 systemd[1]: Stopped target network.target - Network. Jan 30 14:21:12.335727 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:21:12.335824 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 14:21:12.338575 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:21:12.339164 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:21:12.343168 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:21:12.344498 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:21:12.346670 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:21:12.348999 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:21:12.349059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:21:12.349952 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:21:12.349990 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:21:12.351909 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:21:12.351967 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:21:12.353908 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:21:12.353957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:21:12.355169 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:21:12.357907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:21:12.361050 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:21:12.365679 systemd-networkd[786]: eth1: DHCPv6 lease lost Jan 30 14:21:12.370174 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 30 14:21:12.370655 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:21:12.370802 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:21:12.375867 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:21:12.376044 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:21:12.379619 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:21:12.379686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:21:12.387296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:21:12.390325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:21:12.390409 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:21:12.392253 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:21:12.392306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:21:12.392858 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:21:12.392895 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:21:12.393577 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:21:12.393618 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:21:12.394985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:21:12.401253 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:21:12.401354 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:21:12.405516 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:21:12.405594 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:21:12.415590 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 14:21:12.415768 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:21:12.417832 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:21:12.417972 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:21:12.420651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:21:12.420731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:21:12.421408 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:21:12.421452 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:21:12.423284 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:21:12.423348 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:21:12.426045 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:21:12.426134 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:21:12.428219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:21:12.428276 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:21:12.436591 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:21:12.437281 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:21:12.437352 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:21:12.438120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:21:12.438173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:12.445305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:21:12.445451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:21:12.447713 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:21:12.462943 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:21:12.473561 systemd[1]: Switching root. Jan 30 14:21:12.500496 systemd-journald[236]: Journal stopped Jan 30 14:21:13.407803 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 30 14:21:13.407875 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:21:13.407896 kernel: SELinux: policy capability open_perms=1 Jan 30 14:21:13.407906 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:21:13.407916 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:21:13.407926 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:21:13.407942 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:21:13.407951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:21:13.407960 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:21:13.407970 kernel: audit: type=1403 audit(1738246872.631:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:21:13.407982 systemd[1]: Successfully loaded SELinux policy in 35.500ms. Jan 30 14:21:13.408004 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.327ms. 
Jan 30 14:21:13.408020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:21:13.408031 systemd[1]: Detected virtualization kvm. Jan 30 14:21:13.408042 systemd[1]: Detected architecture arm64. Jan 30 14:21:13.408053 systemd[1]: Detected first boot. Jan 30 14:21:13.408064 systemd[1]: Hostname set to <ci-4081-3-0-1-1410e96de7>. Jan 30 14:21:13.408074 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:21:13.408106 zram_generator::config[1057]: No configuration found. Jan 30 14:21:13.408122 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:21:13.408134 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:21:13.408144 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:21:13.408155 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:21:13.408167 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:21:13.408188 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:21:13.408201 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:21:13.408212 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:21:13.408225 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:21:13.408236 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:21:13.408247 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:21:13.408258 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:21:13.408269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:21:13.408280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:21:13.408291 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:21:13.408306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:21:13.408319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:21:13.408330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:21:13.408341 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 14:21:13.408351 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:21:13.408362 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:21:13.408373 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:21:13.408385 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:21:13.408398 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:21:13.408409 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:21:13.408424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:21:13.408435 systemd[1]: Reached target slices.target - Slice Units. 
Jan 30 14:21:13.408446 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:21:13.408457 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:21:13.408468 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:21:13.408479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:21:13.408490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:21:13.408502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:21:13.408513 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:21:13.408525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:21:13.408537 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:21:13.408547 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:21:13.408558 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:21:13.408569 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:21:13.408580 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:21:13.408600 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:21:13.408613 systemd[1]: Reached target machines.target - Containers. Jan 30 14:21:13.408624 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:21:13.408636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:13.408647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:21:13.408658 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:21:13.408671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:13.408682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:21:13.408694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:13.408710 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:21:13.408722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:13.408733 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:21:13.408744 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:21:13.408755 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:21:13.408769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:21:13.408780 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:21:13.408792 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:21:13.408803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:21:13.408814 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:21:13.408825 kernel: fuse: init (API version 7.39) Jan 30 14:21:13.408836 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 30 14:21:13.408847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:21:13.408858 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:21:13.408869 systemd[1]: Stopped verity-setup.service. Jan 30 14:21:13.408881 kernel: ACPI: bus type drm_connector registered Jan 30 14:21:13.408891 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:21:13.408902 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:21:13.408913 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:21:13.408926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:21:13.408937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:21:13.408948 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:21:13.408959 kernel: loop: module loaded Jan 30 14:21:13.408969 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:21:13.408980 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:21:13.408992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:21:13.409003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:13.409014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:13.409027 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:21:13.409038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:21:13.409074 systemd-journald[1124]: Collecting audit messages is disabled. Jan 30 14:21:13.411167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:13.411205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:13.411224 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:21:13.411235 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:21:13.411249 systemd-journald[1124]: Journal started Jan 30 14:21:13.411272 systemd-journald[1124]: Runtime Journal (/run/log/journal/1c4cda2a8ab242f0bb6ed734e9e21500) is 8.0M, max 76.5M, 68.5M free. Jan 30 14:21:13.118696 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:21:13.141146 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 14:21:13.141900 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:21:13.413146 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:13.413199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:13.416147 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:21:13.419649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:21:13.423123 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:21:13.426234 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:21:13.432139 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:21:13.443918 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:21:13.450325 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:21:13.457306 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 30 14:21:13.457958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:21:13.457998 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:21:13.459907 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:21:13.468726 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:21:13.472320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:21:13.474598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:13.481349 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:21:13.484124 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:21:13.488540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:21:13.491434 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:21:13.492059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:21:13.497375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:21:13.506295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:21:13.508765 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:21:13.512536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:21:13.513551 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:21:13.520978 systemd-journald[1124]: Time spent on flushing to /var/log/journal/1c4cda2a8ab242f0bb6ed734e9e21500 is 29.205ms for 1129 entries. Jan 30 14:21:13.520978 systemd-journald[1124]: System Journal (/var/log/journal/1c4cda2a8ab242f0bb6ed734e9e21500) is 8.0M, max 584.8M, 576.8M free. Jan 30 14:21:13.569127 systemd-journald[1124]: Received client request to flush runtime journal. Jan 30 14:21:13.569193 kernel: loop0: detected capacity change from 0 to 8 Jan 30 14:21:13.569210 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:21:13.520375 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:21:13.521405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:21:13.540461 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:21:13.575138 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:21:13.577225 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:21:13.586235 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:21:13.595529 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:21:13.596476 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:21:13.607788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 14:21:13.613579 kernel: loop1: detected capacity change from 0 to 114328 Jan 30 14:21:13.620565 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:21:13.623147 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:21:13.644710 kernel: loop2: detected capacity change from 0 to 194096 Jan 30 14:21:13.650350 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:21:13.661289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:21:13.685922 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 14:21:13.685937 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 14:21:13.693499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:21:13.695128 kernel: loop3: detected capacity change from 0 to 114432 Jan 30 14:21:13.734333 kernel: loop4: detected capacity change from 0 to 8 Jan 30 14:21:13.742136 kernel: loop5: detected capacity change from 0 to 114328 Jan 30 14:21:13.763185 kernel: loop6: detected capacity change from 0 to 194096 Jan 30 14:21:13.785345 kernel: loop7: detected capacity change from 0 to 114432 Jan 30 14:21:13.798382 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 14:21:13.799138 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 30 14:21:13.808996 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:21:13.809222 systemd[1]: Reloading... Jan 30 14:21:13.918116 zram_generator::config[1221]: No configuration found. Jan 30 14:21:14.047109 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:21:14.056875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:14.103117 systemd[1]: Reloading finished in 292 ms. Jan 30 14:21:14.150179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:21:14.152666 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:21:14.162436 systemd[1]: Starting ensure-sysext.service... Jan 30 14:21:14.165527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:21:14.182524 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:21:14.182555 systemd[1]: Reloading... Jan 30 14:21:14.207671 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:21:14.207951 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:21:14.210798 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:21:14.211023 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 30 14:21:14.211068 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 30 14:21:14.216695 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 14:21:14.216709 systemd-tmpfiles[1260]: Skipping /boot Jan 30 14:21:14.233490 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:21:14.233506 systemd-tmpfiles[1260]: Skipping /boot Jan 30 14:21:14.296118 zram_generator::config[1289]: No configuration found. Jan 30 14:21:14.386225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:14.433778 systemd[1]: Reloading finished in 250 ms. Jan 30 14:21:14.459131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:21:14.465669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:21:14.480423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:21:14.484692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:21:14.495533 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:21:14.503496 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:21:14.514046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:21:14.520493 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:21:14.523617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:14.528414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:14.537780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:14.540398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:14.541630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:14.546447 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:21:14.549037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:14.549223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:14.553488 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:21:14.561477 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:21:14.564573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:14.567734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:21:14.568884 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:14.571328 systemd[1]: Finished ensure-sysext.service. Jan 30 14:21:14.584576 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:21:14.586814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:14.587015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 14:21:14.588793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:21:14.589817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:14.589969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:14.592079 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:21:14.606270 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jan 30 14:21:14.614850 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:21:14.620751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:14.621414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:14.625622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:21:14.629360 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:21:14.629528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:21:14.643497 augenrules[1361]: No rules Jan 30 14:21:14.647312 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:21:14.654704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:21:14.656640 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:21:14.656969 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:21:14.665489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:21:14.675278 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:21:14.845653 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 14:21:14.853333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:21:14.854817 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:21:14.874462 systemd-networkd[1376]: lo: Link UP Jan 30 14:21:14.874994 systemd-networkd[1376]: lo: Gained carrier Jan 30 14:21:14.882620 systemd-networkd[1376]: Enumeration completed Jan 30 14:21:14.882841 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:21:14.885499 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:14.885647 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:21:14.888566 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:14.888574 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:21:14.889454 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:21:14.890452 systemd-networkd[1376]: eth0: Link UP Jan 30 14:21:14.890461 systemd-networkd[1376]: eth0: Gained carrier Jan 30 14:21:14.890477 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 14:21:14.897899 systemd-networkd[1376]: eth1: Link UP Jan 30 14:21:14.897909 systemd-networkd[1376]: eth1: Gained carrier Jan 30 14:21:14.897945 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:14.902009 systemd-resolved[1332]: Positive Trust Anchors: Jan 30 14:21:14.902033 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:21:14.902066 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:21:14.902599 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:21:14.911704 systemd-resolved[1332]: Using system hostname 'ci-4081-3-0-1-1410e96de7'. Jan 30 14:21:14.914102 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:21:14.915977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:21:14.917003 systemd[1]: Reached target network.target - Network. Jan 30 14:21:14.917912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:21:14.925110 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1381) Jan 30 14:21:14.931971 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 14:21:14.932317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:14.934539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:14.936031 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:21:14.936818 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 30 14:21:14.939131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:14.942307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:14.944064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:14.944310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:21:14.963254 systemd-networkd[1376]: eth0: DHCPv4 address 49.13.124.2/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:21:14.965021 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 30 14:21:14.968620 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 30 14:21:14.983391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
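Both NICs above are matched by the catch-all /usr/lib/systemd/network/zz-default.network and pick up their addresses via DHCP (10.0.0.3/32 on eth1, 49.13.124.2/32 on eth0). A sketch of an equivalent but explicit systemd-networkd unit that matches by MAC address instead of the "potentially unpredictable" interface name; the file name and MAC below are placeholders, not taken from this host:

    # hypothetical /etc/systemd/network/10-uplink.network
    cat <<'EOF' >/etc/systemd/network/10-uplink.network
    [Match]
    MACAddress=96:00:00:00:00:01

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload         # apply without rebooting
    networkctl status eth0    # confirm address, gateway and DNS learned from DHCP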
Jan 30 14:21:14.984057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:14.985852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:14.986419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:14.987839 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:14.987983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:14.991526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:21:14.991618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:21:15.010108 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 30 14:21:15.010233 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 14:21:15.010250 kernel: [drm] features: -context_init Jan 30 14:21:15.011225 kernel: [drm] number of scanouts: 1 Jan 30 14:21:15.011288 kernel: [drm] number of cap sets: 0 Jan 30 14:21:15.014521 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 14:21:15.029102 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 14:21:15.045268 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 14:21:15.063549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:15.075270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:21:15.075469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:15.079404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:21:15.089401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:21:15.093464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:15.107139 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:21:15.165728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:15.228713 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:21:15.235598 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:21:15.252230 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:21:15.278066 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:21:15.279355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:21:15.280249 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:21:15.281028 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:21:15.281857 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:21:15.282981 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:21:15.283714 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:21:15.284402 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 30 14:21:15.285016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:21:15.285055 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:21:15.285583 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:21:15.287534 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:21:15.290014 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:21:15.296167 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:21:15.299007 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:21:15.300455 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:21:15.301239 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:21:15.301792 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:21:15.302504 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:21:15.302538 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:21:15.305287 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:21:15.311052 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:21:15.311305 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:21:15.316344 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:21:15.322387 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:21:15.327573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:21:15.330248 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:21:15.339447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:21:15.344989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:21:15.362148 jq[1448]: false Jan 30 14:21:15.355357 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 14:21:15.361494 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:21:15.365793 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:21:15.373372 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:21:15.375051 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:21:15.376315 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:21:15.379397 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:21:15.385244 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:21:15.386737 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:21:15.389445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:21:15.391142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 30 14:21:15.396748 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:21:15.396933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:21:15.401718 dbus-daemon[1447]: [system] SELinux support is enabled Jan 30 14:21:15.402689 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:21:15.409101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:21:15.409230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:21:15.410383 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:21:15.410406 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:21:15.425575 jq[1462]: true Jan 30 14:21:15.425811 extend-filesystems[1449]: Found loop4 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found loop5 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found loop6 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found loop7 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda1 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda2 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda3 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found usr Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda4 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda6 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda7 Jan 30 14:21:15.425811 extend-filesystems[1449]: Found sda9 Jan 30 14:21:15.425811 extend-filesystems[1449]: Checking size of /dev/sda9 Jan 30 14:21:15.485803 update_engine[1460]: I20250130 14:21:15.463117 1460 main.cc:92] Flatcar Update Engine starting Jan 30 14:21:15.485803 update_engine[1460]: I20250130 14:21:15.465435 1460 update_check_scheduler.cc:74] Next update check in 8m19s Jan 30 14:21:15.452537 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:21:15.494251 coreos-metadata[1446]: Jan 30 14:21:15.457 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 14:21:15.494251 coreos-metadata[1446]: Jan 30 14:21:15.459 INFO Fetch successful Jan 30 14:21:15.494251 coreos-metadata[1446]: Jan 30 14:21:15.463 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 14:21:15.494251 coreos-metadata[1446]: Jan 30 14:21:15.469 INFO Fetch successful Jan 30 14:21:15.452745 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:21:15.494722 tar[1464]: linux-arm64/helm Jan 30 14:21:15.468808 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:21:15.503901 extend-filesystems[1449]: Resized partition /dev/sda9 Jan 30 14:21:15.474614 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:21:15.497705 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
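coreos-metadata above fetches the Hetzner link-local metadata service and both requests succeed. The same endpoints can be queried by hand when debugging; a sketch assuming curl is available on the host, with the path in the last step being an assumption rather than something shown in this log:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks
    # assumed location where the metadata agent exports variables for other units
    cat /run/metadata/flatcar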
Jan 30 14:21:15.508970 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:21:15.516679 jq[1480]: true Jan 30 14:21:15.521213 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 14:21:15.619279 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1382) Jan 30 14:21:15.657368 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:21:15.676623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:21:15.680470 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:21:15.718608 systemd-logind[1459]: New seat seat0. Jan 30 14:21:15.730010 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:21:15.730043 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 14:21:15.730593 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:21:15.737674 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 14:21:15.762734 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 14:21:15.762734 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 14:21:15.762734 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 14:21:15.777346 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Jan 30 14:21:15.777346 extend-filesystems[1449]: Found sr0 Jan 30 14:21:15.783336 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:21:15.771607 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:21:15.771950 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:21:15.779832 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:21:15.798497 systemd[1]: Starting sshkeys.service... Jan 30 14:21:15.815298 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:21:15.829512 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:21:15.870714 coreos-metadata[1529]: Jan 30 14:21:15.870 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 14:21:15.874593 coreos-metadata[1529]: Jan 30 14:21:15.874 INFO Fetch successful Jan 30 14:21:15.884918 unknown[1529]: wrote ssh authorized keys file for user: core Jan 30 14:21:15.892343 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 30 14:21:15.893341 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 30 14:21:15.896044 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:21:15.901970 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:21:15.909929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:15.921882 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:21:15.942453 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:21:15.941566 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:21:15.949537 systemd[1]: Finished sshkeys.service. 
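extend-filesystems above grows the ext4 root on /dev/sda9 on-line from 1617920 to 9393147 4k blocks via resize2fs. Purely as an illustration of the same operation on a generic ext4 root, not the exact steps the Flatcar unit runs (growpart comes from cloud-utils and is an assumption here):

    growpart /dev/sda 9      # extend partition 9 to the end of the disk
    resize2fs /dev/sda9      # ext4 can be grown while mounted read-write
    df -h /                  # confirm the new size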
Jan 30 14:21:15.987190 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:21:16.030993 containerd[1476]: time="2025-01-30T14:21:16.030354320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:21:16.102818 containerd[1476]: time="2025-01-30T14:21:16.100561880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.104140 containerd[1476]: time="2025-01-30T14:21:16.103997160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:16.104140 containerd[1476]: time="2025-01-30T14:21:16.104134840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:21:16.104268 containerd[1476]: time="2025-01-30T14:21:16.104192960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:21:16.104383 containerd[1476]: time="2025-01-30T14:21:16.104357160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:21:16.104414 containerd[1476]: time="2025-01-30T14:21:16.104383320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.104471 containerd[1476]: time="2025-01-30T14:21:16.104450480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:16.104471 containerd[1476]: time="2025-01-30T14:21:16.104467520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105330 containerd[1476]: time="2025-01-30T14:21:16.105299480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105330 containerd[1476]: time="2025-01-30T14:21:16.105328360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105412 containerd[1476]: time="2025-01-30T14:21:16.105345000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105412 containerd[1476]: time="2025-01-30T14:21:16.105355680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105466 containerd[1476]: time="2025-01-30T14:21:16.105446560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.105689 containerd[1476]: time="2025-01-30T14:21:16.105665360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:16.106543 containerd[1476]: time="2025-01-30T14:21:16.106456160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:16.106543 containerd[1476]: time="2025-01-30T14:21:16.106536880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:21:16.107111 containerd[1476]: time="2025-01-30T14:21:16.106633160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:21:16.107111 containerd[1476]: time="2025-01-30T14:21:16.106684600Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:21:16.115574 containerd[1476]: time="2025-01-30T14:21:16.115510560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:21:16.115680 containerd[1476]: time="2025-01-30T14:21:16.115595080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:21:16.115680 containerd[1476]: time="2025-01-30T14:21:16.115613080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:21:16.115680 containerd[1476]: time="2025-01-30T14:21:16.115628640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:21:16.115772 containerd[1476]: time="2025-01-30T14:21:16.115692200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:21:16.116101 containerd[1476]: time="2025-01-30T14:21:16.115876000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:21:16.116201 containerd[1476]: time="2025-01-30T14:21:16.116180360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:21:16.120257 containerd[1476]: time="2025-01-30T14:21:16.120125480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120262760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120287200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120308400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120326320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120344360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120382 containerd[1476]: time="2025-01-30T14:21:16.120364720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120383920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120400280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120416720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120433480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120461320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120490 containerd[1476]: time="2025-01-30T14:21:16.120480880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120497520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120516560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120530520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120548280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120566040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120597 containerd[1476]: time="2025-01-30T14:21:16.120589200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120697 containerd[1476]: time="2025-01-30T14:21:16.120607080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120697 containerd[1476]: time="2025-01-30T14:21:16.120627640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120697 containerd[1476]: time="2025-01-30T14:21:16.120645080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120697 containerd[1476]: time="2025-01-30T14:21:16.120658920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120697 containerd[1476]: time="2025-01-30T14:21:16.120675760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120781 containerd[1476]: time="2025-01-30T14:21:16.120697720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:21:16.120781 containerd[1476]: time="2025-01-30T14:21:16.120728000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.120781 containerd[1476]: time="2025-01-30T14:21:16.120745160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 14:21:16.120781 containerd[1476]: time="2025-01-30T14:21:16.120759920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:21:16.120908 containerd[1476]: time="2025-01-30T14:21:16.120886360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:21:16.120938 containerd[1476]: time="2025-01-30T14:21:16.120915760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:21:16.120938 containerd[1476]: time="2025-01-30T14:21:16.120932400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:21:16.120974 containerd[1476]: time="2025-01-30T14:21:16.120950920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:21:16.120974 containerd[1476]: time="2025-01-30T14:21:16.120962000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.121013 containerd[1476]: time="2025-01-30T14:21:16.120979040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:21:16.121013 containerd[1476]: time="2025-01-30T14:21:16.120993280Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:21:16.121013 containerd[1476]: time="2025-01-30T14:21:16.121004960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:21:16.121476 containerd[1476]: time="2025-01-30T14:21:16.121409440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:21:16.121765 containerd[1476]: time="2025-01-30T14:21:16.121485360Z" level=info msg="Connect containerd service" Jan 30 14:21:16.121765 containerd[1476]: time="2025-01-30T14:21:16.121594080Z" level=info msg="using legacy CRI server" Jan 30 14:21:16.121765 containerd[1476]: time="2025-01-30T14:21:16.121602000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:21:16.121765 containerd[1476]: time="2025-01-30T14:21:16.121734960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:21:16.126471 containerd[1476]: time="2025-01-30T14:21:16.126339400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127575800Z" level=info msg="Start subscribing containerd event" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127694080Z" level=info msg="Start recovering state" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127778320Z" level=info msg="Start event monitor" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127790400Z" level=info msg="Start snapshots syncer" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127799600Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:21:16.128106 containerd[1476]: time="2025-01-30T14:21:16.127806560Z" level=info msg="Start streaming server" Jan 30 14:21:16.132783 containerd[1476]: time="2025-01-30T14:21:16.129477200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:21:16.132783 containerd[1476]: time="2025-01-30T14:21:16.129545080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:21:16.132783 containerd[1476]: time="2025-01-30T14:21:16.131145360Z" level=info msg="containerd successfully booted in 0.103213s" Jan 30 14:21:16.131312 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:21:16.148407 systemd-networkd[1376]: eth1: Gained IPv6LL Jan 30 14:21:16.151261 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 30 14:21:16.445885 tar[1464]: linux-arm64/LICENSE Jan 30 14:21:16.446143 tar[1464]: linux-arm64/README.md Jan 30 14:21:16.467149 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:21:16.801400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
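The CRI configuration dumped above includes SystemdCgroup:true for the runc runtime, sandbox image registry.k8s.io/pause:3.8 and CNI config under /etc/cni/net.d. A sketch of the corresponding fragment in containerd's TOML form; /etc/containerd/config.toml is the conventional location and an assumption for this image, which ships its configuration elsewhere by default:

    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
    EOF
    systemctl restart containerd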
Jan 30 14:21:16.810604 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:16.958808 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:21:16.986664 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:21:16.996846 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:21:17.005350 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:21:17.005742 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:21:17.015001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:21:17.024935 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:21:17.035780 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:21:17.045536 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 14:21:17.046448 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:21:17.047064 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:21:17.048145 systemd[1]: Startup finished in 787ms (kernel) + 5.931s (initrd) + 4.452s (userspace) = 11.172s. Jan 30 14:21:17.453178 kubelet[1559]: E0130 14:21:17.453058 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:17.455423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:17.455603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:27.662326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:21:27.672621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:27.788072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:27.811662 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:27.863969 kubelet[1596]: E0130 14:21:27.863926 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:27.867501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:27.867775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:37.912363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:21:37.918369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:38.049475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
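The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps rescheduling restarts until something provides it. That file is normally generated by kubeadm init or kubeadm join rather than written by hand. Purely to illustrate what the kubelet is looking for, a minimal KubeletConfiguration sketch with placeholder values, not taken from this host:

    # hypothetical minimal /var/lib/kubelet/config.yaml; on this host kubeadm had not run yet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd     # matches SystemdCgroup=true in the containerd config above
    EOF

Once the file exists (typically written by kubeadm), the restart loop seen in the following entries stops.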
Jan 30 14:21:38.061698 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:38.118973 kubelet[1612]: E0130 14:21:38.118928 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:38.122140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:38.122293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:46.348341 systemd-timesyncd[1350]: Contacted time server 144.76.76.107:123 (2.flatcar.pool.ntp.org). Jan 30 14:21:46.348422 systemd-timesyncd[1350]: Initial clock synchronization to Thu 2025-01-30 14:21:46.046440 UTC. Jan 30 14:21:48.162437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:21:48.172382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:48.286236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:48.297670 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:48.349705 kubelet[1628]: E0130 14:21:48.349608 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:48.351855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:48.352129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:58.412657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:21:58.419538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:58.532792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:58.544685 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:58.593758 kubelet[1643]: E0130 14:21:58.593676 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:58.596287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:58.596441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:59.243196 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:21:59.252602 systemd[1]: Started sshd@0-49.13.124.2:22-45.207.58.154:52503.service - OpenSSH per-connection server daemon (45.207.58.154:52503). Jan 30 14:22:00.903471 update_engine[1460]: I20250130 14:22:00.903147 1460 update_attempter.cc:509] Updating boot flags... 
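update_engine above reports its first check in 8m19s and later updates the boot flags, while locksmithd runs with strategy="reboot". A hedged sketch of inspecting and tuning this on a Flatcar-style host; update_engine_client and /etc/flatcar/update.conf are standard Flatcar tooling, but their presence is assumed rather than shown in this log:

    update_engine_client -status      # current update state and version
    # appends the strategy; edit the existing line instead if one is already present
    cat <<'EOF' >>/etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot
    EOF
    systemctl restart locksmithd      # pick up the changed strategy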
Jan 30 14:22:00.945131 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1664) Jan 30 14:22:01.013275 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1667) Jan 30 14:22:01.768380 sshd[1653]: Invalid user ubuntu from 45.207.58.154 port 52503 Jan 30 14:22:02.047979 sshd[1653]: Received disconnect from 45.207.58.154 port 52503:11: Bye Bye [preauth] Jan 30 14:22:02.047979 sshd[1653]: Disconnected from invalid user ubuntu 45.207.58.154 port 52503 [preauth] Jan 30 14:22:02.049852 systemd[1]: sshd@0-49.13.124.2:22-45.207.58.154:52503.service: Deactivated successfully. Jan 30 14:22:07.703522 systemd[1]: Started sshd@1-49.13.124.2:22-165.232.147.130:52548.service - OpenSSH per-connection server daemon (165.232.147.130:52548). Jan 30 14:22:08.593421 sshd[1676]: Invalid user es from 165.232.147.130 port 52548 Jan 30 14:22:08.662617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 14:22:08.671402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:08.758117 sshd[1676]: Received disconnect from 165.232.147.130 port 52548:11: Bye Bye [preauth] Jan 30 14:22:08.758117 sshd[1676]: Disconnected from invalid user es 165.232.147.130 port 52548 [preauth] Jan 30 14:22:08.757725 systemd[1]: sshd@1-49.13.124.2:22-165.232.147.130:52548.service: Deactivated successfully. Jan 30 14:22:08.785775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:08.791633 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:08.841995 kubelet[1688]: E0130 14:22:08.841922 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:08.845737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:08.845901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:18.912280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 14:22:18.917322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:19.037644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:19.049548 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:19.099862 kubelet[1704]: E0130 14:22:19.099792 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:19.102822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:19.102983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:29.162822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 14:22:29.175522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
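The sshd entries above and below (invalid users ubuntu, es and later sammy, all disconnecting before authentication) are ordinary Internet-wide scanner traffic hitting the public address. A hardening sketch, assuming the sshd_config on this image includes a drop-in directory such as /etc/ssh/sshd_config.d (an assumption; key-only login is already the effective default here, since every probe fails pre-auth):

    cat <<'EOF' >/etc/ssh/sshd_config.d/90-hardening.conf
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no
    EOF
    sshd -t    # syntax-check; new connections pick up the config, since sshd is socket-activated here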
Jan 30 14:22:29.284765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:29.290309 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:29.333076 kubelet[1720]: E0130 14:22:29.333001 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:29.337167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:29.337395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:39.413227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 14:22:39.426430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:39.534477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:39.543643 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:39.592854 kubelet[1737]: E0130 14:22:39.592814 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:39.595669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:39.595983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:44.045332 systemd[1]: Started sshd@2-49.13.124.2:22-5.250.188.211:58182.service - OpenSSH per-connection server daemon (5.250.188.211:58182). Jan 30 14:22:44.374525 sshd[1746]: Invalid user sammy from 5.250.188.211 port 58182 Jan 30 14:22:44.421962 sshd[1746]: Received disconnect from 5.250.188.211 port 58182:11: Bye Bye [preauth] Jan 30 14:22:44.421962 sshd[1746]: Disconnected from invalid user sammy 5.250.188.211 port 58182 [preauth] Jan 30 14:22:44.424242 systemd[1]: sshd@2-49.13.124.2:22-5.250.188.211:58182.service: Deactivated successfully. Jan 30 14:22:49.662402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 14:22:49.669422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:49.784140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:49.790049 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:49.831895 kubelet[1757]: E0130 14:22:49.831849 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:49.834635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:49.834888 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
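The kubelet start/fail cycle above repeats roughly every ten seconds as the restart counter climbs. A sketch of inspecting such a loop with standard systemd tooling, nothing host-specific assumed:

    systemctl status kubelet.service                           # shows "activating (auto-restart)" and the counter
    journalctl -u kubelet.service -b --no-pager | tail -n 20   # the most recent failure messages
    systemctl cat kubelet.service                              # the Restart= / RestartSec= settings driving the retries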
Jan 30 14:22:59.912644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 14:22:59.919802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:00.054103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:00.066678 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:23:00.115424 kubelet[1774]: E0130 14:23:00.115361 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:23:00.117983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:23:00.118178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:23:10.162570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 14:23:10.168430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:10.312694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:10.318040 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:23:10.328389 systemd[1]: Started sshd@3-49.13.124.2:22-139.178.68.195:54536.service - OpenSSH per-connection server daemon (139.178.68.195:54536). Jan 30 14:23:10.366583 kubelet[1789]: E0130 14:23:10.366478 1789 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:23:10.370817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:23:10.371275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:23:11.315347 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 54536 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:11.318123 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:11.327272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:23:11.333435 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:23:11.336596 systemd-logind[1459]: New session 1 of user core. Jan 30 14:23:11.346957 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:23:11.352456 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:23:11.371275 (systemd)[1802]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:23:11.480675 systemd[1802]: Queued start job for default target default.target. Jan 30 14:23:11.491378 systemd[1802]: Created slice app.slice - User Application Slice. Jan 30 14:23:11.491468 systemd[1802]: Reached target paths.target - Paths. Jan 30 14:23:11.491784 systemd[1802]: Reached target timers.target - Timers. Jan 30 14:23:11.493340 systemd[1802]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 30 14:23:11.508612 systemd[1802]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:23:11.508736 systemd[1802]: Reached target sockets.target - Sockets. Jan 30 14:23:11.508749 systemd[1802]: Reached target basic.target - Basic System. Jan 30 14:23:11.508804 systemd[1802]: Reached target default.target - Main User Target. Jan 30 14:23:11.508832 systemd[1802]: Startup finished in 130ms. Jan 30 14:23:11.509424 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:23:11.517558 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:23:12.217873 systemd[1]: Started sshd@4-49.13.124.2:22-139.178.68.195:54544.service - OpenSSH per-connection server daemon (139.178.68.195:54544). Jan 30 14:23:13.193661 sshd[1813]: Accepted publickey for core from 139.178.68.195 port 54544 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:13.195999 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:13.203191 systemd-logind[1459]: New session 2 of user core. Jan 30 14:23:13.210465 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:23:13.875556 sshd[1813]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:13.880330 systemd[1]: sshd@4-49.13.124.2:22-139.178.68.195:54544.service: Deactivated successfully. Jan 30 14:23:13.882419 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:23:13.883847 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:23:13.886450 systemd-logind[1459]: Removed session 2. Jan 30 14:23:14.055585 systemd[1]: Started sshd@5-49.13.124.2:22-139.178.68.195:54558.service - OpenSSH per-connection server daemon (139.178.68.195:54558). Jan 30 14:23:15.024386 sshd[1820]: Accepted publickey for core from 139.178.68.195 port 54558 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:15.026514 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:15.034058 systemd-logind[1459]: New session 3 of user core. Jan 30 14:23:15.039752 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:23:15.695070 sshd[1820]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:15.700916 systemd[1]: sshd@5-49.13.124.2:22-139.178.68.195:54558.service: Deactivated successfully. Jan 30 14:23:15.703751 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:23:15.705057 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:23:15.706491 systemd-logind[1459]: Removed session 3. Jan 30 14:23:15.873591 systemd[1]: Started sshd@6-49.13.124.2:22-139.178.68.195:58060.service - OpenSSH per-connection server daemon (139.178.68.195:58060). Jan 30 14:23:16.566384 systemd[1]: Started sshd@7-49.13.124.2:22-165.232.147.130:35360.service - OpenSSH per-connection server daemon (165.232.147.130:35360). Jan 30 14:23:16.877819 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 58060 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:16.879562 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:16.885239 systemd-logind[1459]: New session 4 of user core. Jan 30 14:23:16.888489 systemd[1]: Started session-4.scope - Session 4 of User core. 
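The lines above show a full login path: sshd accepts a publickey for core, logind creates the session, user-runtime-dir@500 prepares /run/user/500 and user@500.service brings up the per-user manager with its own default.target. A sketch of inspecting the same state from a shell, using standard logind and systemd commands:

    loginctl list-sessions                           # session-1.scope and the later sessions
    loginctl show-user core -p Sessions -p RuntimePath
    systemctl status user@500.service                # the per-user manager started above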
Jan 30 14:23:17.428761 sshd[1830]: Invalid user admin from 165.232.147.130 port 35360 Jan 30 14:23:17.561144 sshd[1827]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:17.567449 systemd[1]: sshd@6-49.13.124.2:22-139.178.68.195:58060.service: Deactivated successfully. Jan 30 14:23:17.567803 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:23:17.569828 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:23:17.571250 systemd-logind[1459]: Removed session 4. Jan 30 14:23:17.594872 sshd[1830]: Received disconnect from 165.232.147.130 port 35360:11: Bye Bye [preauth] Jan 30 14:23:17.594872 sshd[1830]: Disconnected from invalid user admin 165.232.147.130 port 35360 [preauth] Jan 30 14:23:17.597775 systemd[1]: sshd@7-49.13.124.2:22-165.232.147.130:35360.service: Deactivated successfully. Jan 30 14:23:17.736004 systemd[1]: Started sshd@8-49.13.124.2:22-139.178.68.195:58072.service - OpenSSH per-connection server daemon (139.178.68.195:58072). Jan 30 14:23:18.618665 systemd[1]: Started sshd@9-49.13.124.2:22-153.37.192.4:35638.service - OpenSSH per-connection server daemon (153.37.192.4:35638). Jan 30 14:23:18.705757 sshd[1839]: Accepted publickey for core from 139.178.68.195 port 58072 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:18.707646 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:18.712846 systemd-logind[1459]: New session 5 of user core. Jan 30 14:23:18.723426 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:23:19.233255 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:23:19.233567 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:23:19.251599 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 30 14:23:19.410521 sshd[1839]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:19.415646 systemd[1]: sshd@8-49.13.124.2:22-139.178.68.195:58072.service: Deactivated successfully. Jan 30 14:23:19.418291 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:23:19.420098 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:23:19.421572 systemd-logind[1459]: Removed session 5. Jan 30 14:23:19.592658 systemd[1]: Started sshd@10-49.13.124.2:22-139.178.68.195:58078.service - OpenSSH per-connection server daemon (139.178.68.195:58078). Jan 30 14:23:20.404925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 14:23:20.413432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:20.548982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:20.563707 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:23:20.571947 sshd[1849]: Accepted publickey for core from 139.178.68.195 port 58078 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:20.574427 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:20.584505 systemd-logind[1459]: New session 6 of user core. Jan 30 14:23:20.591034 systemd[1]: Started session-6.scope - Session 6 of User core. 
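Annotation: mixed in with the legitimate core sessions, the entries above also record an unauthenticated probe ("Invalid user admin from 165.232.147.130") that disconnects before authentication. A sketch, again taking a plain journal dump as input, that counts such probes per source address:

import re
import sys
from collections import Counter

INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def probe_report(journal_path):
    probes = Counter()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := INVALID.search(line)):
                # key by (source address, attempted user name)
                probes[(m.group(2), m.group(1))] += 1
    for (addr, user), count in probes.most_common():
        print(f"{addr} tried invalid user {user!r} {count} time(s)")

if __name__ == "__main__":
    probe_report(sys.argv[1])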
Jan 30 14:23:20.626713 kubelet[1859]: E0130 14:23:20.626651 1859 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:23:20.629622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:23:20.629789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:23:21.094825 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:23:21.095623 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:23:21.100151 sudo[1869]: pam_unix(sudo:session): session closed for user root Jan 30 14:23:21.107144 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:23:21.107504 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:23:21.124627 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:23:21.126744 auditctl[1872]: No rules Jan 30 14:23:21.127323 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:23:21.127669 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:23:21.131830 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:23:21.164180 augenrules[1890]: No rules Jan 30 14:23:21.165180 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:23:21.168319 sudo[1868]: pam_unix(sudo:session): session closed for user root Jan 30 14:23:21.328462 sshd[1849]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:21.335034 systemd[1]: sshd@10-49.13.124.2:22-139.178.68.195:58078.service: Deactivated successfully. Jan 30 14:23:21.337652 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:23:21.338840 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:23:21.341449 systemd-logind[1459]: Removed session 6. Jan 30 14:23:21.504384 systemd[1]: Started sshd@11-49.13.124.2:22-139.178.68.195:58084.service - OpenSSH per-connection server daemon (139.178.68.195:58084). Jan 30 14:23:22.495049 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 58084 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:23:22.497392 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:23:22.503706 systemd-logind[1459]: New session 7 of user core. Jan 30 14:23:22.513578 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:23:23.023414 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:23:23.023720 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:23:23.340655 (dockerd)[1916]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:23:23.340695 systemd[1]: Starting docker.service - Docker Application Container Engine... 
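Annotation: the docker.service start that follows brings the engine up on the overlay2 storage driver (version 26.1.0), creates the docker0 bridge and exposes the API on /run/docker.sock. A sketch for cross-checking that from the daemon API, assuming the Docker SDK for Python (pip install docker) is available on the host:

import docker

# from_env() talks to the default unix socket, /run/docker.sock
client = docker.from_env()
info = client.info()
print("server version:", info.get("ServerVersion"))
print("storage driver:", info.get("Driver"))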
Jan 30 14:23:23.589693 dockerd[1916]: time="2025-01-30T14:23:23.589602526Z" level=info msg="Starting up" Jan 30 14:23:23.699718 dockerd[1916]: time="2025-01-30T14:23:23.699601068Z" level=info msg="Loading containers: start." Jan 30 14:23:23.818208 kernel: Initializing XFRM netlink socket Jan 30 14:23:23.908298 systemd-networkd[1376]: docker0: Link UP Jan 30 14:23:23.932000 dockerd[1916]: time="2025-01-30T14:23:23.931808799Z" level=info msg="Loading containers: done." Jan 30 14:23:23.947185 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3375427787-merged.mount: Deactivated successfully. Jan 30 14:23:23.948159 dockerd[1916]: time="2025-01-30T14:23:23.947494625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:23:23.948159 dockerd[1916]: time="2025-01-30T14:23:23.947615387Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:23:23.948159 dockerd[1916]: time="2025-01-30T14:23:23.947735789Z" level=info msg="Daemon has completed initialization" Jan 30 14:23:24.002337 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:23:24.004025 dockerd[1916]: time="2025-01-30T14:23:24.001913226Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:23:25.120657 containerd[1476]: time="2025-01-30T14:23:25.120594582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:23:25.820748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199483917.mount: Deactivated successfully. Jan 30 14:23:27.311177 containerd[1476]: time="2025-01-30T14:23:27.309320469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:27.312183 containerd[1476]: time="2025-01-30T14:23:27.311718867Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 30 14:23:27.313804 containerd[1476]: time="2025-01-30T14:23:27.313738659Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:27.317767 containerd[1476]: time="2025-01-30T14:23:27.317710241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:27.319273 containerd[1476]: time="2025-01-30T14:23:27.319230065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.198581563s" Jan 30 14:23:27.319426 containerd[1476]: time="2025-01-30T14:23:27.319407068Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:23:27.342744 containerd[1476]: time="2025-01-30T14:23:27.342705955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:23:28.769999 
containerd[1476]: time="2025-01-30T14:23:28.769916734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:28.771123 containerd[1476]: time="2025-01-30T14:23:28.770959590Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 30 14:23:28.772100 containerd[1476]: time="2025-01-30T14:23:28.772039327Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:28.776445 containerd[1476]: time="2025-01-30T14:23:28.775224256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:28.776848 containerd[1476]: time="2025-01-30T14:23:28.776670159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.433923043s" Jan 30 14:23:28.776848 containerd[1476]: time="2025-01-30T14:23:28.776707359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:23:28.797688 containerd[1476]: time="2025-01-30T14:23:28.797627923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:23:30.082620 containerd[1476]: time="2025-01-30T14:23:30.082544573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:30.084123 containerd[1476]: time="2025-01-30T14:23:30.084029356Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 30 14:23:30.085121 containerd[1476]: time="2025-01-30T14:23:30.084871888Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:30.089870 containerd[1476]: time="2025-01-30T14:23:30.089797282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:30.091320 containerd[1476]: time="2025-01-30T14:23:30.091160782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.293480259s" Jan 30 14:23:30.091320 containerd[1476]: time="2025-01-30T14:23:30.091211983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:23:30.113180 containerd[1476]: 
time="2025-01-30T14:23:30.113095470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:23:30.662413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 14:23:30.669424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:30.804496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:30.805500 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:23:30.869041 kubelet[2144]: E0130 14:23:30.868992 2144 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:23:30.873430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:23:30.873770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:23:31.157334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899139963.mount: Deactivated successfully. Jan 30 14:23:31.465356 containerd[1476]: time="2025-01-30T14:23:31.465274527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:31.466860 containerd[1476]: time="2025-01-30T14:23:31.466798189Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 30 14:23:31.468530 containerd[1476]: time="2025-01-30T14:23:31.467444399Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:31.483459 containerd[1476]: time="2025-01-30T14:23:31.483384473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:31.485236 containerd[1476]: time="2025-01-30T14:23:31.485066698Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.371830786s" Jan 30 14:23:31.485236 containerd[1476]: time="2025-01-30T14:23:31.485165539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:23:31.508136 containerd[1476]: time="2025-01-30T14:23:31.507875593Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:23:32.132635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361736101.mount: Deactivated successfully. 
Jan 30 14:23:32.750614 containerd[1476]: time="2025-01-30T14:23:32.750478918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:32.752752 containerd[1476]: time="2025-01-30T14:23:32.752705150Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 14:23:32.755176 containerd[1476]: time="2025-01-30T14:23:32.755122465Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:32.762853 containerd[1476]: time="2025-01-30T14:23:32.762427851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:32.763784 containerd[1476]: time="2025-01-30T14:23:32.763743790Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.255819396s" Jan 30 14:23:32.763857 containerd[1476]: time="2025-01-30T14:23:32.763784310Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:23:32.788017 containerd[1476]: time="2025-01-30T14:23:32.787957500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:23:33.309649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680340244.mount: Deactivated successfully. 
Jan 30 14:23:33.318369 containerd[1476]: time="2025-01-30T14:23:33.317145878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:33.318369 containerd[1476]: time="2025-01-30T14:23:33.317993530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 30 14:23:33.319612 containerd[1476]: time="2025-01-30T14:23:33.319528631Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:33.322638 containerd[1476]: time="2025-01-30T14:23:33.322591075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:33.323505 containerd[1476]: time="2025-01-30T14:23:33.323464487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 535.451307ms" Jan 30 14:23:33.323505 containerd[1476]: time="2025-01-30T14:23:33.323502768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:23:33.346925 containerd[1476]: time="2025-01-30T14:23:33.346867660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:23:33.936514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830741859.mount: Deactivated successfully. Jan 30 14:23:35.368198 containerd[1476]: time="2025-01-30T14:23:35.367655216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:35.370202 containerd[1476]: time="2025-01-30T14:23:35.370134370Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 30 14:23:35.372131 containerd[1476]: time="2025-01-30T14:23:35.372024116Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:35.377327 containerd[1476]: time="2025-01-30T14:23:35.377217467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:35.379852 containerd[1476]: time="2025-01-30T14:23:35.379753822Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.032815241s" Jan 30 14:23:35.380513 containerd[1476]: time="2025-01-30T14:23:35.380119787Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:23:40.912471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
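Annotation: the pulls above fetch the image set a kubeadm-provisioned v1.30.9 control plane needs (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.11.1, pause 3.9 and etcd 3.5.12-0). A short worked example with the sizes and durations containerd reported; the throughput figure is only rough, since the reported size is not exactly the bytes transferred:

# Sizes (bytes) and pull durations (seconds) copied from the containerd entries above.
pulls = {
    "kube-apiserver:v1.30.9":          (29_861_735, 2.198581563),
    "kube-controller-manager:v1.30.9": (28_305_351, 1.433923043),
    "kube-scheduler:v1.30.9":          (17_568_146, 1.293480259),
    "kube-proxy:v1.30.9":              (25_661_731, 1.371830786),
    "coredns:v1.11.1":                 (16_482_581, 1.255819396),
    "pause:3.9":                       (268_051,    0.535451307),
    "etcd:3.5.12-0":                   (66_189_079, 2.032815241),
}

total_bytes = sum(size for size, _ in pulls.values())
total_secs = sum(secs for _, secs in pulls.values())
print(f"{total_bytes / 1024**2:.1f} MiB pulled in {total_secs:.1f} s "
      f"(~{total_bytes / 1024**2 / total_secs:.1f} MiB/s overall)")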
Jan 30 14:23:40.922401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:41.035878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:41.047880 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:23:41.067229 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:41.067554 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:23:41.067736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:41.077440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:41.106398 systemd[1]: Reloading requested from client PID 2341 ('systemctl') (unit session-7.scope)... Jan 30 14:23:41.106429 systemd[1]: Reloading... Jan 30 14:23:41.233115 zram_generator::config[2383]: No configuration found. Jan 30 14:23:41.342998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:23:41.412160 systemd[1]: Reloading finished in 305 ms. Jan 30 14:23:41.462770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:41.467020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:41.470324 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:23:41.471212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:41.476480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:41.601649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:41.613631 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:23:41.658269 kubelet[2433]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:23:41.658613 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:23:41.658657 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
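Annotation: the deprecation warnings above point at kubelet flags that have KubeletConfiguration equivalents. A minimal sketch of that mapping, expressed as a Python dict and serialized with PyYAML; the field names follow kubelet.config.k8s.io/v1beta1, the values are illustrative placeholders rather than settings read from this host, and the sandbox image needs no field at all since, as the warning says, the kubelet now takes it from the CRI:

import yaml  # PyYAML

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (placeholder value)
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces --volume-plugin-dir (placeholder value)
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}

print(yaml.safe_dump(kubelet_config, sort_keys=False))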
Jan 30 14:23:41.658775 kubelet[2433]: I0130 14:23:41.658745 2433 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:23:42.970606 kubelet[2433]: I0130 14:23:42.970526 2433 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:23:42.970606 kubelet[2433]: I0130 14:23:42.970575 2433 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:23:42.971250 kubelet[2433]: I0130 14:23:42.970935 2433 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:23:42.991590 kubelet[2433]: E0130 14:23:42.991504 2433 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.124.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:42.991971 kubelet[2433]: I0130 14:23:42.991849 2433 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:23:43.003395 kubelet[2433]: I0130 14:23:43.003347 2433 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:23:43.003806 kubelet[2433]: I0130 14:23:43.003758 2433 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:23:43.003987 kubelet[2433]: I0130 14:23:43.003792 2433 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-1-1410e96de7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:23:43.004101 kubelet[2433]: I0130 14:23:43.004068 2433 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:23:43.004101 kubelet[2433]: I0130 14:23:43.004096 2433 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:23:43.004318 kubelet[2433]: I0130 14:23:43.004301 2433 state_mem.go:36] "Initialized new in-memory 
state store" Jan 30 14:23:43.007411 kubelet[2433]: I0130 14:23:43.007073 2433 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:23:43.007411 kubelet[2433]: I0130 14:23:43.007153 2433 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:23:43.007411 kubelet[2433]: I0130 14:23:43.007337 2433 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:23:43.007411 kubelet[2433]: I0130 14:23:43.007375 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:23:43.008774 kubelet[2433]: W0130 14:23:43.008723 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-1410e96de7&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.009277 kubelet[2433]: E0130 14:23:43.009252 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-1410e96de7&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.011232 kubelet[2433]: I0130 14:23:43.009693 2433 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:23:43.011232 kubelet[2433]: I0130 14:23:43.010168 2433 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:23:43.011232 kubelet[2433]: W0130 14:23:43.010286 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:23:43.011470 kubelet[2433]: I0130 14:23:43.011451 2433 server.go:1264] "Started kubelet" Jan 30 14:23:43.016838 kubelet[2433]: I0130 14:23:43.016810 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:23:43.017769 kubelet[2433]: W0130 14:23:43.017698 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.017769 kubelet[2433]: E0130 14:23:43.017763 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.018179 kubelet[2433]: E0130 14:23:43.017942 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.124.2:6443/api/v1/namespaces/default/events\": dial tcp 49.13.124.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-1-1410e96de7.181f7e753279d317 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-1-1410e96de7,UID:ci-4081-3-0-1-1410e96de7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-1-1410e96de7,},FirstTimestamp:2025-01-30 14:23:43.011418903 +0000 UTC m=+1.394128748,LastTimestamp:2025-01-30 14:23:43.011418903 +0000 UTC m=+1.394128748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-1-1410e96de7,}" Jan 30 
14:23:43.020812 kubelet[2433]: I0130 14:23:43.020697 2433 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:23:43.022153 kubelet[2433]: I0130 14:23:43.022104 2433 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:23:43.023153 kubelet[2433]: I0130 14:23:43.022925 2433 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:23:43.023854 kubelet[2433]: I0130 14:23:43.023236 2433 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:23:43.024591 kubelet[2433]: I0130 14:23:43.024548 2433 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:23:43.024675 kubelet[2433]: I0130 14:23:43.024656 2433 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:23:43.024805 kubelet[2433]: I0130 14:23:43.024788 2433 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:23:43.025203 kubelet[2433]: W0130 14:23:43.025164 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.025292 kubelet[2433]: E0130 14:23:43.025211 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.026674 kubelet[2433]: I0130 14:23:43.026634 2433 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:23:43.027859 kubelet[2433]: I0130 14:23:43.026763 2433 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:23:43.027859 kubelet[2433]: E0130 14:23:43.027182 2433 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:23:43.028856 kubelet[2433]: E0130 14:23:43.028805 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-1410e96de7?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="200ms" Jan 30 14:23:43.029031 kubelet[2433]: I0130 14:23:43.029005 2433 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:23:43.044117 kubelet[2433]: I0130 14:23:43.044001 2433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:23:43.045304 kubelet[2433]: I0130 14:23:43.045279 2433 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:23:43.045754 kubelet[2433]: I0130 14:23:43.045417 2433 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:23:43.045754 kubelet[2433]: I0130 14:23:43.045451 2433 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:23:43.045754 kubelet[2433]: E0130 14:23:43.045497 2433 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:23:43.051892 kubelet[2433]: I0130 14:23:43.051856 2433 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:23:43.051892 kubelet[2433]: I0130 14:23:43.051875 2433 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:23:43.051892 kubelet[2433]: I0130 14:23:43.051894 2433 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:23:43.052710 kubelet[2433]: W0130 14:23:43.052665 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.052847 kubelet[2433]: E0130 14:23:43.052833 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.055549 kubelet[2433]: I0130 14:23:43.055500 2433 policy_none.go:49] "None policy: Start" Jan 30 14:23:43.056737 kubelet[2433]: I0130 14:23:43.056695 2433 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:23:43.056737 kubelet[2433]: I0130 14:23:43.056747 2433 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:23:43.063244 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:23:43.077630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:23:43.081783 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:23:43.094609 kubelet[2433]: I0130 14:23:43.093983 2433 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:23:43.094942 kubelet[2433]: I0130 14:23:43.094854 2433 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:23:43.095193 kubelet[2433]: I0130 14:23:43.095157 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:23:43.097670 kubelet[2433]: E0130 14:23:43.097592 2433 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-1-1410e96de7\" not found" Jan 30 14:23:43.127718 kubelet[2433]: I0130 14:23:43.127648 2433 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.128225 kubelet[2433]: E0130 14:23:43.128194 2433 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.146389 kubelet[2433]: I0130 14:23:43.146310 2433 topology_manager.go:215] "Topology Admit Handler" podUID="a912907e3852582193575dd143ab0ff2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.149745 kubelet[2433]: I0130 14:23:43.149697 2433 topology_manager.go:215] "Topology Admit Handler" podUID="84373eaf6b2140a43ec2e7f957352ba9" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.151686 kubelet[2433]: I0130 14:23:43.151649 2433 topology_manager.go:215] "Topology Admit Handler" podUID="7782b37187b6814e9f35acb622c4dbdb" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.161904 systemd[1]: Created slice kubepods-burstable-poda912907e3852582193575dd143ab0ff2.slice - libcontainer container kubepods-burstable-poda912907e3852582193575dd143ab0ff2.slice. Jan 30 14:23:43.171528 systemd[1]: Created slice kubepods-burstable-pod84373eaf6b2140a43ec2e7f957352ba9.slice - libcontainer container kubepods-burstable-pod84373eaf6b2140a43ec2e7f957352ba9.slice. Jan 30 14:23:43.176412 systemd[1]: Created slice kubepods-burstable-pod7782b37187b6814e9f35acb622c4dbdb.slice - libcontainer container kubepods-burstable-pod7782b37187b6814e9f35acb622c4dbdb.slice. 
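Annotation: the three "Topology Admit Handler" entries above are the static pods read from /etc/kubernetes/manifests, and each gets its own systemd slice under the burstable QoS slice created a moment earlier. A sketch of the naming rule those slice names appear to follow when the kubelet runs with the systemd cgroup driver (QoS class plus pod UID, with any dashes in the UID mapped to underscores):

def pod_slice(pod_uid: str, qos: str = "burstable") -> str:
    # kubelet (cgroupDriver=systemd) names pod cgroups kubepods-<qos>-pod<uid>.slice,
    # replacing dashes in the UID with underscores.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# UID taken from the kube-controller-manager entry above; the output matches
# the slice systemd just created.
print(pod_slice("a912907e3852582193575dd143ab0ff2"))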
Jan 30 14:23:43.225963 kubelet[2433]: I0130 14:23:43.225505 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.225963 kubelet[2433]: I0130 14:23:43.225574 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.225963 kubelet[2433]: I0130 14:23:43.225623 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.225963 kubelet[2433]: I0130 14:23:43.225664 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.225963 kubelet[2433]: I0130 14:23:43.225707 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.227198 kubelet[2433]: I0130 14:23:43.225742 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.227198 kubelet[2433]: I0130 14:23:43.225776 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84373eaf6b2140a43ec2e7f957352ba9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-1-1410e96de7\" (UID: \"84373eaf6b2140a43ec2e7f957352ba9\") " pod="kube-system/kube-scheduler-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.227198 kubelet[2433]: I0130 14:23:43.225809 2433 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.227198 kubelet[2433]: I0130 14:23:43.225845 2433 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.229840 kubelet[2433]: E0130 14:23:43.229769 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-1410e96de7?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="400ms" Jan 30 14:23:43.331204 kubelet[2433]: I0130 14:23:43.331122 2433 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.331591 kubelet[2433]: E0130 14:23:43.331544 2433 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.470349 containerd[1476]: time="2025-01-30T14:23:43.470269567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-1-1410e96de7,Uid:a912907e3852582193575dd143ab0ff2,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:43.475037 containerd[1476]: time="2025-01-30T14:23:43.474937624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-1-1410e96de7,Uid:84373eaf6b2140a43ec2e7f957352ba9,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:43.481065 containerd[1476]: time="2025-01-30T14:23:43.480648214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-1-1410e96de7,Uid:7782b37187b6814e9f35acb622c4dbdb,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:43.631262 kubelet[2433]: E0130 14:23:43.631167 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-1410e96de7?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="800ms" Jan 30 14:23:43.735513 kubelet[2433]: I0130 14:23:43.735341 2433 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.736195 kubelet[2433]: E0130 14:23:43.736076 2433 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:43.829069 kubelet[2433]: W0130 14:23:43.828933 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:43.829069 kubelet[2433]: E0130 14:23:43.829022 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.124.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.011930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686572813.mount: Deactivated successfully. 
Jan 30 14:23:44.020788 containerd[1476]: time="2025-01-30T14:23:44.020728469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:23:44.022289 containerd[1476]: time="2025-01-30T14:23:44.022238008Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:23:44.023490 containerd[1476]: time="2025-01-30T14:23:44.023449022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 14:23:44.024569 containerd[1476]: time="2025-01-30T14:23:44.024513395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:23:44.025553 containerd[1476]: time="2025-01-30T14:23:44.025461326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:23:44.027001 containerd[1476]: time="2025-01-30T14:23:44.026908304Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:23:44.028124 containerd[1476]: time="2025-01-30T14:23:44.028026077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:23:44.032122 containerd[1476]: time="2025-01-30T14:23:44.030988793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:23:44.032980 containerd[1476]: time="2025-01-30T14:23:44.032941417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.821871ms" Jan 30 14:23:44.035871 containerd[1476]: time="2025-01-30T14:23:44.035815972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.07288ms" Jan 30 14:23:44.038429 containerd[1476]: time="2025-01-30T14:23:44.038378403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.622228ms" Jan 30 14:23:44.053469 kubelet[2433]: W0130 14:23:44.053394 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.053469 
kubelet[2433]: E0130 14:23:44.053442 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.124.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.057710 kubelet[2433]: W0130 14:23:44.057567 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-1410e96de7&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.057710 kubelet[2433]: E0130 14:23:44.057640 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.124.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-1-1410e96de7&limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.150383 containerd[1476]: time="2025-01-30T14:23:44.149718549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:44.150383 containerd[1476]: time="2025-01-30T14:23:44.149785510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:44.150383 containerd[1476]: time="2025-01-30T14:23:44.149797950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.150383 containerd[1476]: time="2025-01-30T14:23:44.149879311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.156617 containerd[1476]: time="2025-01-30T14:23:44.156379229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:44.156617 containerd[1476]: time="2025-01-30T14:23:44.156440310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:44.156617 containerd[1476]: time="2025-01-30T14:23:44.156455910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.157340 containerd[1476]: time="2025-01-30T14:23:44.156555991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.161676 containerd[1476]: time="2025-01-30T14:23:44.161410250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:44.161676 containerd[1476]: time="2025-01-30T14:23:44.161459651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:44.161676 containerd[1476]: time="2025-01-30T14:23:44.161470171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.161676 containerd[1476]: time="2025-01-30T14:23:44.161541452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:44.180286 systemd[1]: Started cri-containerd-e48afcdd96ee36208dc22601cbb0b15f8085e16d8a7701dbab488d1b29f03a6e.scope - libcontainer container e48afcdd96ee36208dc22601cbb0b15f8085e16d8a7701dbab488d1b29f03a6e. Jan 30 14:23:44.203550 systemd[1]: Started cri-containerd-a3ff80721a745322d8b05ab84be8594591467e98b8bce5c0a1e1489914f2258e.scope - libcontainer container a3ff80721a745322d8b05ab84be8594591467e98b8bce5c0a1e1489914f2258e. Jan 30 14:23:44.207099 systemd[1]: Started cri-containerd-a7061f3f66a5a3700ab9a90bf699e9cae21e73dab3a0bfd08fc08e6e5737eb62.scope - libcontainer container a7061f3f66a5a3700ab9a90bf699e9cae21e73dab3a0bfd08fc08e6e5737eb62. Jan 30 14:23:44.253258 kubelet[2433]: W0130 14:23:44.253142 2433 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.253258 kubelet[2433]: E0130 14:23:44.253205 2433 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.124.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.124.2:6443: connect: connection refused Jan 30 14:23:44.265780 containerd[1476]: time="2025-01-30T14:23:44.265577710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-1-1410e96de7,Uid:a912907e3852582193575dd143ab0ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3ff80721a745322d8b05ab84be8594591467e98b8bce5c0a1e1489914f2258e\"" Jan 30 14:23:44.270317 containerd[1476]: time="2025-01-30T14:23:44.270163285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-1-1410e96de7,Uid:7782b37187b6814e9f35acb622c4dbdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e48afcdd96ee36208dc22601cbb0b15f8085e16d8a7701dbab488d1b29f03a6e\"" Jan 30 14:23:44.274343 containerd[1476]: time="2025-01-30T14:23:44.274195574Z" level=info msg="CreateContainer within sandbox \"a3ff80721a745322d8b05ab84be8594591467e98b8bce5c0a1e1489914f2258e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:23:44.274581 containerd[1476]: time="2025-01-30T14:23:44.274518898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-1-1410e96de7,Uid:84373eaf6b2140a43ec2e7f957352ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7061f3f66a5a3700ab9a90bf699e9cae21e73dab3a0bfd08fc08e6e5737eb62\"" Jan 30 14:23:44.276781 containerd[1476]: time="2025-01-30T14:23:44.276747525Z" level=info msg="CreateContainer within sandbox \"e48afcdd96ee36208dc22601cbb0b15f8085e16d8a7701dbab488d1b29f03a6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:23:44.279213 containerd[1476]: time="2025-01-30T14:23:44.279172514Z" level=info msg="CreateContainer within sandbox \"a7061f3f66a5a3700ab9a90bf699e9cae21e73dab3a0bfd08fc08e6e5737eb62\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:23:44.304816 containerd[1476]: time="2025-01-30T14:23:44.304763703Z" level=info msg="CreateContainer within sandbox \"a3ff80721a745322d8b05ab84be8594591467e98b8bce5c0a1e1489914f2258e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7e8ff834b1fe6ddd62c3c18499770ac9b796b4145923d8591a7b2dc36235547\"" Jan 30 14:23:44.305795 containerd[1476]: 
time="2025-01-30T14:23:44.305514392Z" level=info msg="CreateContainer within sandbox \"e48afcdd96ee36208dc22601cbb0b15f8085e16d8a7701dbab488d1b29f03a6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4ca5bd9448a22ff3f8adb5309ddf65a78aac0dbab3348bb00374671cf6f56bf\"" Jan 30 14:23:44.306253 containerd[1476]: time="2025-01-30T14:23:44.306217561Z" level=info msg="StartContainer for \"d4ca5bd9448a22ff3f8adb5309ddf65a78aac0dbab3348bb00374671cf6f56bf\"" Jan 30 14:23:44.309231 containerd[1476]: time="2025-01-30T14:23:44.307768460Z" level=info msg="CreateContainer within sandbox \"a7061f3f66a5a3700ab9a90bf699e9cae21e73dab3a0bfd08fc08e6e5737eb62\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eee5e89d4c785a64834caf820d75e52781c1cdc0fa772f2561407607f1ab0ad5\"" Jan 30 14:23:44.309231 containerd[1476]: time="2025-01-30T14:23:44.307952342Z" level=info msg="StartContainer for \"d7e8ff834b1fe6ddd62c3c18499770ac9b796b4145923d8591a7b2dc36235547\"" Jan 30 14:23:44.319183 containerd[1476]: time="2025-01-30T14:23:44.319135757Z" level=info msg="StartContainer for \"eee5e89d4c785a64834caf820d75e52781c1cdc0fa772f2561407607f1ab0ad5\"" Jan 30 14:23:44.336506 systemd[1]: Started cri-containerd-d4ca5bd9448a22ff3f8adb5309ddf65a78aac0dbab3348bb00374671cf6f56bf.scope - libcontainer container d4ca5bd9448a22ff3f8adb5309ddf65a78aac0dbab3348bb00374671cf6f56bf. Jan 30 14:23:44.347270 systemd[1]: Started cri-containerd-d7e8ff834b1fe6ddd62c3c18499770ac9b796b4145923d8591a7b2dc36235547.scope - libcontainer container d7e8ff834b1fe6ddd62c3c18499770ac9b796b4145923d8591a7b2dc36235547. Jan 30 14:23:44.383764 systemd[1]: Started cri-containerd-eee5e89d4c785a64834caf820d75e52781c1cdc0fa772f2561407607f1ab0ad5.scope - libcontainer container eee5e89d4c785a64834caf820d75e52781c1cdc0fa772f2561407607f1ab0ad5. 
Jan 30 14:23:44.402310 containerd[1476]: time="2025-01-30T14:23:44.402163761Z" level=info msg="StartContainer for \"d4ca5bd9448a22ff3f8adb5309ddf65a78aac0dbab3348bb00374671cf6f56bf\" returns successfully" Jan 30 14:23:44.408299 containerd[1476]: time="2025-01-30T14:23:44.408228674Z" level=info msg="StartContainer for \"d7e8ff834b1fe6ddd62c3c18499770ac9b796b4145923d8591a7b2dc36235547\" returns successfully" Jan 30 14:23:44.434513 kubelet[2433]: E0130 14:23:44.434436 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.124.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-1-1410e96de7?timeout=10s\": dial tcp 49.13.124.2:6443: connect: connection refused" interval="1.6s" Jan 30 14:23:44.467459 containerd[1476]: time="2025-01-30T14:23:44.467308709Z" level=info msg="StartContainer for \"eee5e89d4c785a64834caf820d75e52781c1cdc0fa772f2561407607f1ab0ad5\" returns successfully" Jan 30 14:23:44.538757 kubelet[2433]: I0130 14:23:44.538225 2433 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:44.538757 kubelet[2433]: E0130 14:23:44.538575 2433 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.124.2:6443/api/v1/nodes\": dial tcp 49.13.124.2:6443: connect: connection refused" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:46.141362 kubelet[2433]: I0130 14:23:46.141324 2433 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:46.529551 kubelet[2433]: E0130 14:23:46.529501 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-1-1410e96de7\" not found" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:46.632786 kubelet[2433]: I0130 14:23:46.632740 2433 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:47.017968 kubelet[2433]: I0130 14:23:47.017568 2433 apiserver.go:52] "Watching apiserver" Jan 30 14:23:47.025742 kubelet[2433]: I0130 14:23:47.025663 2433 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:23:48.554552 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-7.scope)... Jan 30 14:23:48.554569 systemd[1]: Reloading... Jan 30 14:23:48.653118 zram_generator::config[2751]: No configuration found. Jan 30 14:23:48.754209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:23:48.834369 systemd[1]: Reloading finished in 279 ms. Jan 30 14:23:48.887030 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:48.887696 kubelet[2433]: I0130 14:23:48.887302 2433 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:23:48.901629 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:23:48.902010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:48.902145 systemd[1]: kubelet.service: Consumed 1.793s CPU time, 114.7M memory peak, 0B memory swap peak. Jan 30 14:23:48.913542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:49.041160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:23:49.053479 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:23:49.121785 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:23:49.124300 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:23:49.124300 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:23:49.124300 kubelet[2793]: I0130 14:23:49.122256 2793 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:23:49.127800 kubelet[2793]: I0130 14:23:49.127762 2793 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:23:49.127967 kubelet[2793]: I0130 14:23:49.127954 2793 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:23:49.128320 kubelet[2793]: I0130 14:23:49.128301 2793 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:23:49.130276 kubelet[2793]: I0130 14:23:49.130245 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:23:49.131989 kubelet[2793]: I0130 14:23:49.131927 2793 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:23:49.141423 kubelet[2793]: I0130 14:23:49.141390 2793 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:23:49.141643 kubelet[2793]: I0130 14:23:49.141592 2793 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:23:49.141806 kubelet[2793]: I0130 14:23:49.141626 2793 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-1-1410e96de7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:23:49.141902 kubelet[2793]: I0130 14:23:49.141810 2793 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:23:49.141902 kubelet[2793]: I0130 14:23:49.141820 2793 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:23:49.141902 kubelet[2793]: I0130 14:23:49.141854 2793 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:23:49.142008 kubelet[2793]: I0130 14:23:49.141993 2793 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:23:49.142032 kubelet[2793]: I0130 14:23:49.142014 2793 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:23:49.142659 kubelet[2793]: I0130 14:23:49.142636 2793 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:23:49.142713 kubelet[2793]: I0130 14:23:49.142668 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:23:49.146351 kubelet[2793]: I0130 14:23:49.146320 2793 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:23:49.146545 kubelet[2793]: I0130 14:23:49.146527 2793 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:23:49.146948 kubelet[2793]: I0130 14:23:49.146930 2793 server.go:1264] "Started kubelet" Jan 30 14:23:49.151855 kubelet[2793]: I0130 14:23:49.151818 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:23:49.168154 kubelet[2793]: I0130 14:23:49.167292 2793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:23:49.168816 kubelet[2793]: I0130 14:23:49.168798 2793 server.go:455] "Adding 
debug handlers to kubelet server" Jan 30 14:23:49.171140 kubelet[2793]: I0130 14:23:49.169800 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:23:49.171485 kubelet[2793]: I0130 14:23:49.171467 2793 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:23:49.171632 kubelet[2793]: I0130 14:23:49.171621 2793 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:23:49.172676 kubelet[2793]: I0130 14:23:49.172657 2793 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:23:49.172902 kubelet[2793]: I0130 14:23:49.172888 2793 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:23:49.184586 kubelet[2793]: I0130 14:23:49.184556 2793 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:23:49.184730 kubelet[2793]: I0130 14:23:49.184666 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:23:49.188333 kubelet[2793]: I0130 14:23:49.186251 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:23:49.190690 kubelet[2793]: I0130 14:23:49.190650 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:23:49.190690 kubelet[2793]: I0130 14:23:49.190698 2793 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:23:49.190857 kubelet[2793]: I0130 14:23:49.190718 2793 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:23:49.190857 kubelet[2793]: E0130 14:23:49.190765 2793 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:23:49.191957 kubelet[2793]: E0130 14:23:49.191933 2793 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:23:49.193848 kubelet[2793]: I0130 14:23:49.193823 2793 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:23:49.237467 kubelet[2793]: I0130 14:23:49.237421 2793 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:23:49.237467 kubelet[2793]: I0130 14:23:49.237441 2793 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:23:49.237467 kubelet[2793]: I0130 14:23:49.237463 2793 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:23:49.237679 kubelet[2793]: I0130 14:23:49.237629 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:23:49.237679 kubelet[2793]: I0130 14:23:49.237639 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:23:49.237679 kubelet[2793]: I0130 14:23:49.237658 2793 policy_none.go:49] "None policy: Start" Jan 30 14:23:49.238585 kubelet[2793]: I0130 14:23:49.238562 2793 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:23:49.238585 kubelet[2793]: I0130 14:23:49.238590 2793 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:23:49.238751 kubelet[2793]: I0130 14:23:49.238733 2793 state_mem.go:75] "Updated machine memory state" Jan 30 14:23:49.243954 kubelet[2793]: I0130 14:23:49.243707 2793 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:23:49.244111 kubelet[2793]: I0130 14:23:49.243959 2793 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:23:49.244111 kubelet[2793]: I0130 14:23:49.244056 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:23:49.276839 kubelet[2793]: I0130 14:23:49.276807 2793 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.287867 kubelet[2793]: I0130 14:23:49.287830 2793 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.288037 kubelet[2793]: I0130 14:23:49.287931 2793 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.291109 kubelet[2793]: I0130 14:23:49.291022 2793 topology_manager.go:215] "Topology Admit Handler" podUID="7782b37187b6814e9f35acb622c4dbdb" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.291234 kubelet[2793]: I0130 14:23:49.291210 2793 topology_manager.go:215] "Topology Admit Handler" podUID="a912907e3852582193575dd143ab0ff2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.291318 kubelet[2793]: I0130 14:23:49.291272 2793 topology_manager.go:215] "Topology Admit Handler" podUID="84373eaf6b2140a43ec2e7f957352ba9" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.374702 kubelet[2793]: I0130 14:23:49.374186 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.374702 kubelet[2793]: I0130 14:23:49.374237 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84373eaf6b2140a43ec2e7f957352ba9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-1-1410e96de7\" (UID: \"84373eaf6b2140a43ec2e7f957352ba9\") " pod="kube-system/kube-scheduler-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.374702 kubelet[2793]: I0130 14:23:49.374264 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.374702 kubelet[2793]: I0130 14:23:49.374291 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.374702 kubelet[2793]: I0130 14:23:49.374326 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.375158 kubelet[2793]: I0130 14:23:49.374383 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.375158 kubelet[2793]: I0130 14:23:49.374417 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a912907e3852582193575dd143ab0ff2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-1-1410e96de7\" (UID: \"a912907e3852582193575dd143ab0ff2\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.375158 kubelet[2793]: I0130 14:23:49.374440 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.375158 kubelet[2793]: I0130 14:23:49.374471 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7782b37187b6814e9f35acb622c4dbdb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" (UID: \"7782b37187b6814e9f35acb622c4dbdb\") " pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:49.550360 sudo[2825]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 14:23:49.550712 sudo[2825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 14:23:49.996181 
sudo[2825]: pam_unix(sudo:session): session closed for user root Jan 30 14:23:50.145792 kubelet[2793]: I0130 14:23:50.145690 2793 apiserver.go:52] "Watching apiserver" Jan 30 14:23:50.173468 kubelet[2793]: I0130 14:23:50.173391 2793 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:23:50.230767 kubelet[2793]: E0130 14:23:50.230201 2793 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-1-1410e96de7\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" Jan 30 14:23:50.261262 kubelet[2793]: I0130 14:23:50.260440 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-1-1410e96de7" podStartSLOduration=1.260422583 podStartE2EDuration="1.260422583s" podCreationTimestamp="2025-01-30 14:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:50.243613995 +0000 UTC m=+1.185011327" watchObservedRunningTime="2025-01-30 14:23:50.260422583 +0000 UTC m=+1.201819915" Jan 30 14:23:50.273141 kubelet[2793]: I0130 14:23:50.272328 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-1-1410e96de7" podStartSLOduration=1.272306196 podStartE2EDuration="1.272306196s" podCreationTimestamp="2025-01-30 14:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:50.261566876 +0000 UTC m=+1.202964208" watchObservedRunningTime="2025-01-30 14:23:50.272306196 +0000 UTC m=+1.213703528" Jan 30 14:23:50.284705 kubelet[2793]: I0130 14:23:50.284634 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-1-1410e96de7" podStartSLOduration=1.284610054 podStartE2EDuration="1.284610054s" podCreationTimestamp="2025-01-30 14:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:50.273317967 +0000 UTC m=+1.214715299" watchObservedRunningTime="2025-01-30 14:23:50.284610054 +0000 UTC m=+1.226007426" Jan 30 14:23:52.562111 sudo[1901]: pam_unix(sudo:session): session closed for user root Jan 30 14:23:52.723244 sshd[1898]: pam_unix(sshd:session): session closed for user core Jan 30 14:23:52.727164 systemd[1]: sshd@11-49.13.124.2:22-139.178.68.195:58084.service: Deactivated successfully. Jan 30 14:23:52.730223 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:23:52.730465 systemd[1]: session-7.scope: Consumed 8.530s CPU time, 186.4M memory peak, 0B memory swap peak. Jan 30 14:23:52.732846 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:23:52.734702 systemd-logind[1459]: Removed session 7. Jan 30 14:23:53.129637 systemd[1]: Started sshd@12-49.13.124.2:22-183.88.232.183:41912.service - OpenSSH per-connection server daemon (183.88.232.183:41912). Jan 30 14:23:54.241569 sshd[2863]: Invalid user yudi from 183.88.232.183 port 41912 Jan 30 14:23:54.448620 sshd[2863]: Received disconnect from 183.88.232.183 port 41912:11: Bye Bye [preauth] Jan 30 14:23:54.448620 sshd[2863]: Disconnected from invalid user yudi 183.88.232.183 port 41912 [preauth] Jan 30 14:23:54.451864 systemd[1]: sshd@12-49.13.124.2:22-183.88.232.183:41912.service: Deactivated successfully. 
Jan 30 14:23:57.179429 systemd[1]: Started sshd@13-49.13.124.2:22-45.207.58.154:39654.service - OpenSSH per-connection server daemon (45.207.58.154:39654). Jan 30 14:23:58.854020 sshd[2868]: Invalid user deploy from 45.207.58.154 port 39654 Jan 30 14:23:59.153732 sshd[2868]: Received disconnect from 45.207.58.154 port 39654:11: Bye Bye [preauth] Jan 30 14:23:59.155165 sshd[2868]: Disconnected from invalid user deploy 45.207.58.154 port 39654 [preauth] Jan 30 14:23:59.155745 systemd[1]: sshd@13-49.13.124.2:22-45.207.58.154:39654.service: Deactivated successfully. Jan 30 14:24:04.111687 kubelet[2793]: I0130 14:24:04.111355 2793 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:24:04.112785 kubelet[2793]: I0130 14:24:04.112493 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:24:04.112837 containerd[1476]: time="2025-01-30T14:24:04.111944354Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:24:04.998741 kubelet[2793]: I0130 14:24:04.998694 2793 topology_manager.go:215] "Topology Admit Handler" podUID="ecec26af-246d-470f-a285-47e09f6e919b" podNamespace="kube-system" podName="kube-proxy-cpfbh" Jan 30 14:24:05.011629 systemd[1]: Created slice kubepods-besteffort-podecec26af_246d_470f_a285_47e09f6e919b.slice - libcontainer container kubepods-besteffort-podecec26af_246d_470f_a285_47e09f6e919b.slice. Jan 30 14:24:05.016932 kubelet[2793]: I0130 14:24:05.016878 2793 topology_manager.go:215] "Topology Admit Handler" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" podNamespace="kube-system" podName="cilium-rtf8z" Jan 30 14:24:05.028244 systemd[1]: Created slice kubepods-burstable-podf7a69c7c_015f_47af_92f2_6cf24e22bf49.slice - libcontainer container kubepods-burstable-podf7a69c7c_015f_47af_92f2_6cf24e22bf49.slice. 
Jan 30 14:24:05.078425 kubelet[2793]: I0130 14:24:05.078376 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-lib-modules\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078425 kubelet[2793]: I0130 14:24:05.078421 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hubble-tls\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078446 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-run\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078462 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-etc-cni-netd\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078479 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-net\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078496 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-bpf-maps\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078510 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hostproc\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078605 kubelet[2793]: I0130 14:24:05.078525 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-cgroup\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078539 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecec26af-246d-470f-a285-47e09f6e919b-xtables-lock\") pod \"kube-proxy-cpfbh\" (UID: \"ecec26af-246d-470f-a285-47e09f6e919b\") " pod="kube-system/kube-proxy-cpfbh" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078554 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-config-path\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078569 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ecec26af-246d-470f-a285-47e09f6e919b-kube-proxy\") pod \"kube-proxy-cpfbh\" (UID: \"ecec26af-246d-470f-a285-47e09f6e919b\") " pod="kube-system/kube-proxy-cpfbh" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078587 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecec26af-246d-470f-a285-47e09f6e919b-lib-modules\") pod \"kube-proxy-cpfbh\" (UID: \"ecec26af-246d-470f-a285-47e09f6e919b\") " pod="kube-system/kube-proxy-cpfbh" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078601 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cni-path\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078741 kubelet[2793]: I0130 14:24:05.078618 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7a69c7c-015f-47af-92f2-6cf24e22bf49-clustermesh-secrets\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078867 kubelet[2793]: I0130 14:24:05.078633 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cj95\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-kube-api-access-2cj95\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078867 kubelet[2793]: I0130 14:24:05.078649 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8x6\" (UniqueName: \"kubernetes.io/projected/ecec26af-246d-470f-a285-47e09f6e919b-kube-api-access-zt8x6\") pod \"kube-proxy-cpfbh\" (UID: \"ecec26af-246d-470f-a285-47e09f6e919b\") " pod="kube-system/kube-proxy-cpfbh" Jan 30 14:24:05.078867 kubelet[2793]: I0130 14:24:05.078663 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-xtables-lock\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.078867 kubelet[2793]: I0130 14:24:05.078680 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-kernel\") pod \"cilium-rtf8z\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " pod="kube-system/cilium-rtf8z" Jan 30 14:24:05.220580 kubelet[2793]: I0130 14:24:05.220515 2793 topology_manager.go:215] "Topology Admit Handler" podUID="485eb47f-9a7e-4b8c-8647-e4da385f5f38" podNamespace="kube-system" podName="cilium-operator-599987898-f2lb5" Jan 30 14:24:05.230276 systemd[1]: Created slice 
kubepods-besteffort-pod485eb47f_9a7e_4b8c_8647_e4da385f5f38.slice - libcontainer container kubepods-besteffort-pod485eb47f_9a7e_4b8c_8647_e4da385f5f38.slice. Jan 30 14:24:05.282516 kubelet[2793]: I0130 14:24:05.282279 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmlg\" (UniqueName: \"kubernetes.io/projected/485eb47f-9a7e-4b8c-8647-e4da385f5f38-kube-api-access-dbmlg\") pod \"cilium-operator-599987898-f2lb5\" (UID: \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\") " pod="kube-system/cilium-operator-599987898-f2lb5" Jan 30 14:24:05.282516 kubelet[2793]: I0130 14:24:05.282404 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/485eb47f-9a7e-4b8c-8647-e4da385f5f38-cilium-config-path\") pod \"cilium-operator-599987898-f2lb5\" (UID: \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\") " pod="kube-system/cilium-operator-599987898-f2lb5" Jan 30 14:24:05.321497 containerd[1476]: time="2025-01-30T14:24:05.321418007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpfbh,Uid:ecec26af-246d-470f-a285-47e09f6e919b,Namespace:kube-system,Attempt:0,}" Jan 30 14:24:05.340162 containerd[1476]: time="2025-01-30T14:24:05.335365861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rtf8z,Uid:f7a69c7c-015f-47af-92f2-6cf24e22bf49,Namespace:kube-system,Attempt:0,}" Jan 30 14:24:05.352054 containerd[1476]: time="2025-01-30T14:24:05.351820939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:24:05.352861 containerd[1476]: time="2025-01-30T14:24:05.352706788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:24:05.352861 containerd[1476]: time="2025-01-30T14:24:05.352728788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.353123 containerd[1476]: time="2025-01-30T14:24:05.352896389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.380606 systemd[1]: Started cri-containerd-a8a557cf3137ad9416837074a447ee36a92e343232afc34fd5664832e14e476e.scope - libcontainer container a8a557cf3137ad9416837074a447ee36a92e343232afc34fd5664832e14e476e. Jan 30 14:24:05.382751 containerd[1476]: time="2025-01-30T14:24:05.382450513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:24:05.382751 containerd[1476]: time="2025-01-30T14:24:05.382606555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:24:05.384679 containerd[1476]: time="2025-01-30T14:24:05.382619475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.385124 containerd[1476]: time="2025-01-30T14:24:05.384861056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.418929 systemd[1]: Started cri-containerd-5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9.scope - libcontainer container 5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9. Jan 30 14:24:05.439101 containerd[1476]: time="2025-01-30T14:24:05.438952576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpfbh,Uid:ecec26af-246d-470f-a285-47e09f6e919b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a557cf3137ad9416837074a447ee36a92e343232afc34fd5664832e14e476e\"" Jan 30 14:24:05.445393 containerd[1476]: time="2025-01-30T14:24:05.445141556Z" level=info msg="CreateContainer within sandbox \"a8a557cf3137ad9416837074a447ee36a92e343232afc34fd5664832e14e476e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:24:05.458940 containerd[1476]: time="2025-01-30T14:24:05.458898648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rtf8z,Uid:f7a69c7c-015f-47af-92f2-6cf24e22bf49,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\"" Jan 30 14:24:05.461247 containerd[1476]: time="2025-01-30T14:24:05.461199510Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 14:24:05.466372 containerd[1476]: time="2025-01-30T14:24:05.466320599Z" level=info msg="CreateContainer within sandbox \"a8a557cf3137ad9416837074a447ee36a92e343232afc34fd5664832e14e476e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c261827974cbe9405a58958136855730ce5fdeebb9eb24acd38113c865493d7\"" Jan 30 14:24:05.468173 containerd[1476]: time="2025-01-30T14:24:05.467007046Z" level=info msg="StartContainer for \"6c261827974cbe9405a58958136855730ce5fdeebb9eb24acd38113c865493d7\"" Jan 30 14:24:05.497396 systemd[1]: Started cri-containerd-6c261827974cbe9405a58958136855730ce5fdeebb9eb24acd38113c865493d7.scope - libcontainer container 6c261827974cbe9405a58958136855730ce5fdeebb9eb24acd38113c865493d7. Jan 30 14:24:05.529524 containerd[1476]: time="2025-01-30T14:24:05.529464605Z" level=info msg="StartContainer for \"6c261827974cbe9405a58958136855730ce5fdeebb9eb24acd38113c865493d7\" returns successfully" Jan 30 14:24:05.536669 containerd[1476]: time="2025-01-30T14:24:05.536525153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f2lb5,Uid:485eb47f-9a7e-4b8c-8647-e4da385f5f38,Namespace:kube-system,Attempt:0,}" Jan 30 14:24:05.567341 containerd[1476]: time="2025-01-30T14:24:05.567059287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:24:05.567341 containerd[1476]: time="2025-01-30T14:24:05.567224648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:24:05.567341 containerd[1476]: time="2025-01-30T14:24:05.567236488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.567779 containerd[1476]: time="2025-01-30T14:24:05.567700173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:05.587133 systemd[1]: Started cri-containerd-26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e.scope - libcontainer container 26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e. Jan 30 14:24:05.627882 containerd[1476]: time="2025-01-30T14:24:05.627591708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f2lb5,Uid:485eb47f-9a7e-4b8c-8647-e4da385f5f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\"" Jan 30 14:24:06.266515 kubelet[2793]: I0130 14:24:06.266445 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cpfbh" podStartSLOduration=2.266425983 podStartE2EDuration="2.266425983s" podCreationTimestamp="2025-01-30 14:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:24:06.266254181 +0000 UTC m=+17.207651513" watchObservedRunningTime="2025-01-30 14:24:06.266425983 +0000 UTC m=+17.207823275" Jan 30 14:24:09.342574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164856216.mount: Deactivated successfully. Jan 30 14:24:10.696542 containerd[1476]: time="2025-01-30T14:24:10.696469935Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:24:10.698437 containerd[1476]: time="2025-01-30T14:24:10.697911669Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 14:24:10.700122 containerd[1476]: time="2025-01-30T14:24:10.699917127Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:24:10.702672 containerd[1476]: time="2025-01-30T14:24:10.701846545Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.24013611s" Jan 30 14:24:10.702672 containerd[1476]: time="2025-01-30T14:24:10.701890185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 14:24:10.706265 containerd[1476]: time="2025-01-30T14:24:10.704842293Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:24:10.707572 containerd[1476]: time="2025-01-30T14:24:10.707488957Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:24:10.719733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700749414.mount: Deactivated successfully. 
Jan 30 14:24:10.729533 containerd[1476]: time="2025-01-30T14:24:10.729446000Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\"" Jan 30 14:24:10.730304 containerd[1476]: time="2025-01-30T14:24:10.730236447Z" level=info msg="StartContainer for \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\"" Jan 30 14:24:10.767531 systemd[1]: Started cri-containerd-8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad.scope - libcontainer container 8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad. Jan 30 14:24:10.798660 containerd[1476]: time="2025-01-30T14:24:10.797914591Z" level=info msg="StartContainer for \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\" returns successfully" Jan 30 14:24:10.814557 systemd[1]: cri-containerd-8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad.scope: Deactivated successfully. Jan 30 14:24:11.010416 containerd[1476]: time="2025-01-30T14:24:11.010122867Z" level=info msg="shim disconnected" id=8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad namespace=k8s.io Jan 30 14:24:11.010416 containerd[1476]: time="2025-01-30T14:24:11.010223108Z" level=warning msg="cleaning up after shim disconnected" id=8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad namespace=k8s.io Jan 30 14:24:11.010416 containerd[1476]: time="2025-01-30T14:24:11.010235988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:11.279210 containerd[1476]: time="2025-01-30T14:24:11.278102799Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:24:11.297578 containerd[1476]: time="2025-01-30T14:24:11.297429736Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\"" Jan 30 14:24:11.300238 containerd[1476]: time="2025-01-30T14:24:11.300190601Z" level=info msg="StartContainer for \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\"" Jan 30 14:24:11.333516 systemd[1]: Started cri-containerd-1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74.scope - libcontainer container 1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74. Jan 30 14:24:11.365825 containerd[1476]: time="2025-01-30T14:24:11.365710921Z" level=info msg="StartContainer for \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\" returns successfully" Jan 30 14:24:11.377275 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:24:11.377855 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:24:11.378144 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:24:11.384494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:24:11.386990 systemd[1]: cri-containerd-1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74.scope: Deactivated successfully. Jan 30 14:24:11.409631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 14:24:11.422142 containerd[1476]: time="2025-01-30T14:24:11.421370990Z" level=info msg="shim disconnected" id=1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74 namespace=k8s.io Jan 30 14:24:11.422142 containerd[1476]: time="2025-01-30T14:24:11.421479591Z" level=warning msg="cleaning up after shim disconnected" id=1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74 namespace=k8s.io Jan 30 14:24:11.422142 containerd[1476]: time="2025-01-30T14:24:11.421502271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:11.717123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad-rootfs.mount: Deactivated successfully. Jan 30 14:24:12.285818 containerd[1476]: time="2025-01-30T14:24:12.285748801Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:24:12.307988 containerd[1476]: time="2025-01-30T14:24:12.307850122Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\"" Jan 30 14:24:12.310920 containerd[1476]: time="2025-01-30T14:24:12.309280254Z" level=info msg="StartContainer for \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\"" Jan 30 14:24:12.349375 systemd[1]: Started cri-containerd-a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19.scope - libcontainer container a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19. Jan 30 14:24:12.380890 containerd[1476]: time="2025-01-30T14:24:12.380844505Z" level=info msg="StartContainer for \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\" returns successfully" Jan 30 14:24:12.386275 systemd[1]: cri-containerd-a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19.scope: Deactivated successfully. Jan 30 14:24:12.418319 containerd[1476]: time="2025-01-30T14:24:12.418033562Z" level=info msg="shim disconnected" id=a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19 namespace=k8s.io Jan 30 14:24:12.418319 containerd[1476]: time="2025-01-30T14:24:12.418111243Z" level=warning msg="cleaning up after shim disconnected" id=a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19 namespace=k8s.io Jan 30 14:24:12.418319 containerd[1476]: time="2025-01-30T14:24:12.418120963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:12.717879 systemd[1]: run-containerd-runc-k8s.io-a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19-runc.heGO6l.mount: Deactivated successfully. Jan 30 14:24:12.719229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19-rootfs.mount: Deactivated successfully. 
Jan 30 14:24:13.297109 containerd[1476]: time="2025-01-30T14:24:13.296169639Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:24:13.323478 containerd[1476]: time="2025-01-30T14:24:13.322823160Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\"" Jan 30 14:24:13.324387 containerd[1476]: time="2025-01-30T14:24:13.324335613Z" level=info msg="StartContainer for \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\"" Jan 30 14:24:13.361367 systemd[1]: Started cri-containerd-032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c.scope - libcontainer container 032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c. Jan 30 14:24:13.412784 systemd[1]: cri-containerd-032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c.scope: Deactivated successfully. Jan 30 14:24:13.415533 containerd[1476]: time="2025-01-30T14:24:13.415375754Z" level=info msg="StartContainer for \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\" returns successfully" Jan 30 14:24:13.453500 containerd[1476]: time="2025-01-30T14:24:13.453315856Z" level=info msg="shim disconnected" id=032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c namespace=k8s.io Jan 30 14:24:13.453705 containerd[1476]: time="2025-01-30T14:24:13.453478058Z" level=warning msg="cleaning up after shim disconnected" id=032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c namespace=k8s.io Jan 30 14:24:13.453705 containerd[1476]: time="2025-01-30T14:24:13.453577819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:24:13.716393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c-rootfs.mount: Deactivated successfully. 
Jan 30 14:24:13.784671 containerd[1476]: time="2025-01-30T14:24:13.783885877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:24:13.786507 containerd[1476]: time="2025-01-30T14:24:13.786464381Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 14:24:13.788278 containerd[1476]: time="2025-01-30T14:24:13.788149876Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:24:13.790029 containerd[1476]: time="2025-01-30T14:24:13.789950372Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.085062999s" Jan 30 14:24:13.790126 containerd[1476]: time="2025-01-30T14:24:13.790031173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 14:24:13.795225 containerd[1476]: time="2025-01-30T14:24:13.795146299Z" level=info msg="CreateContainer within sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 14:24:13.814726 containerd[1476]: time="2025-01-30T14:24:13.814659635Z" level=info msg="CreateContainer within sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\"" Jan 30 14:24:13.817531 containerd[1476]: time="2025-01-30T14:24:13.817355779Z" level=info msg="StartContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\"" Jan 30 14:24:13.847728 systemd[1]: Started cri-containerd-321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318.scope - libcontainer container 321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318. 
Jan 30 14:24:13.882065 containerd[1476]: time="2025-01-30T14:24:13.881985762Z" level=info msg="StartContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" returns successfully" Jan 30 14:24:14.297852 containerd[1476]: time="2025-01-30T14:24:14.297702972Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:24:14.316104 containerd[1476]: time="2025-01-30T14:24:14.315120008Z" level=info msg="CreateContainer within sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\"" Jan 30 14:24:14.318316 containerd[1476]: time="2025-01-30T14:24:14.316514140Z" level=info msg="StartContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\"" Jan 30 14:24:14.362153 kubelet[2793]: I0130 14:24:14.362073 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-f2lb5" podStartSLOduration=1.2013073 podStartE2EDuration="9.362053428s" podCreationTimestamp="2025-01-30 14:24:05 +0000 UTC" firstStartedPulling="2025-01-30 14:24:05.630730338 +0000 UTC m=+16.572127670" lastFinishedPulling="2025-01-30 14:24:13.791476506 +0000 UTC m=+24.732873798" observedRunningTime="2025-01-30 14:24:14.318333876 +0000 UTC m=+25.259731208" watchObservedRunningTime="2025-01-30 14:24:14.362053428 +0000 UTC m=+25.303450760" Jan 30 14:24:14.373372 systemd[1]: Started cri-containerd-83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5.scope - libcontainer container 83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5. Jan 30 14:24:14.439908 containerd[1476]: time="2025-01-30T14:24:14.439853084Z" level=info msg="StartContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" returns successfully" Jan 30 14:24:14.590121 kubelet[2793]: I0130 14:24:14.589895 2793 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:24:14.729332 kubelet[2793]: I0130 14:24:14.729289 2793 topology_manager.go:215] "Topology Admit Handler" podUID="6ec2fbda-173d-4dfc-b238-2c1593590df3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9ggzz" Jan 30 14:24:14.733215 kubelet[2793]: I0130 14:24:14.731835 2793 topology_manager.go:215] "Topology Admit Handler" podUID="3e945d52-aa44-4400-ae68-803bec3643ce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pmcjw" Jan 30 14:24:14.739602 systemd[1]: Created slice kubepods-burstable-pod6ec2fbda_173d_4dfc_b238_2c1593590df3.slice - libcontainer container kubepods-burstable-pod6ec2fbda_173d_4dfc_b238_2c1593590df3.slice. Jan 30 14:24:14.749557 systemd[1]: Created slice kubepods-burstable-pod3e945d52_aa44_4400_ae68_803bec3643ce.slice - libcontainer container kubepods-burstable-pod3e945d52_aa44_4400_ae68_803bec3643ce.slice. 
Jan 30 14:24:14.755657 kubelet[2793]: I0130 14:24:14.755542 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e945d52-aa44-4400-ae68-803bec3643ce-config-volume\") pod \"coredns-7db6d8ff4d-pmcjw\" (UID: \"3e945d52-aa44-4400-ae68-803bec3643ce\") " pod="kube-system/coredns-7db6d8ff4d-pmcjw" Jan 30 14:24:14.755657 kubelet[2793]: I0130 14:24:14.755594 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4lg\" (UniqueName: \"kubernetes.io/projected/6ec2fbda-173d-4dfc-b238-2c1593590df3-kube-api-access-qv4lg\") pod \"coredns-7db6d8ff4d-9ggzz\" (UID: \"6ec2fbda-173d-4dfc-b238-2c1593590df3\") " pod="kube-system/coredns-7db6d8ff4d-9ggzz" Jan 30 14:24:14.755657 kubelet[2793]: I0130 14:24:14.755615 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tzl\" (UniqueName: \"kubernetes.io/projected/3e945d52-aa44-4400-ae68-803bec3643ce-kube-api-access-l5tzl\") pod \"coredns-7db6d8ff4d-pmcjw\" (UID: \"3e945d52-aa44-4400-ae68-803bec3643ce\") " pod="kube-system/coredns-7db6d8ff4d-pmcjw" Jan 30 14:24:14.755889 kubelet[2793]: I0130 14:24:14.755635 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ec2fbda-173d-4dfc-b238-2c1593590df3-config-volume\") pod \"coredns-7db6d8ff4d-9ggzz\" (UID: \"6ec2fbda-173d-4dfc-b238-2c1593590df3\") " pod="kube-system/coredns-7db6d8ff4d-9ggzz" Jan 30 14:24:14.756736 kubelet[2793]: W0130 14:24:14.756683 2793 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:24:14.756736 kubelet[2793]: E0130 14:24:14.756716 2793 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:24:15.857601 kubelet[2793]: E0130 14:24:15.857544 2793 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:24:15.857988 kubelet[2793]: E0130 14:24:15.857670 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e945d52-aa44-4400-ae68-803bec3643ce-config-volume podName:3e945d52-aa44-4400-ae68-803bec3643ce nodeName:}" failed. No retries permitted until 2025-01-30 14:24:16.357630006 +0000 UTC m=+27.299027338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3e945d52-aa44-4400-ae68-803bec3643ce-config-volume") pod "coredns-7db6d8ff4d-pmcjw" (UID: "3e945d52-aa44-4400-ae68-803bec3643ce") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:24:15.857988 kubelet[2793]: E0130 14:24:15.857545 2793 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:24:15.857988 kubelet[2793]: E0130 14:24:15.857881 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6ec2fbda-173d-4dfc-b238-2c1593590df3-config-volume podName:6ec2fbda-173d-4dfc-b238-2c1593590df3 nodeName:}" failed. No retries permitted until 2025-01-30 14:24:16.357872368 +0000 UTC m=+27.299269700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ec2fbda-173d-4dfc-b238-2c1593590df3-config-volume") pod "coredns-7db6d8ff4d-9ggzz" (UID: "6ec2fbda-173d-4dfc-b238-2c1593590df3") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:24:16.545351 containerd[1476]: time="2025-01-30T14:24:16.544831965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ggzz,Uid:6ec2fbda-173d-4dfc-b238-2c1593590df3,Namespace:kube-system,Attempt:0,}" Jan 30 14:24:16.557629 containerd[1476]: time="2025-01-30T14:24:16.557543757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmcjw,Uid:3e945d52-aa44-4400-ae68-803bec3643ce,Namespace:kube-system,Attempt:0,}" Jan 30 14:24:17.719796 systemd-networkd[1376]: cilium_host: Link UP Jan 30 14:24:17.720577 systemd-networkd[1376]: cilium_net: Link UP Jan 30 14:24:17.720583 systemd-networkd[1376]: cilium_net: Gained carrier Jan 30 14:24:17.721724 systemd-networkd[1376]: cilium_host: Gained carrier Jan 30 14:24:17.838185 systemd-networkd[1376]: cilium_net: Gained IPv6LL Jan 30 14:24:17.843556 systemd-networkd[1376]: cilium_vxlan: Link UP Jan 30 14:24:17.843777 systemd-networkd[1376]: cilium_vxlan: Gained carrier Jan 30 14:24:18.139129 kernel: NET: Registered PF_ALG protocol family Jan 30 14:24:18.164765 systemd-networkd[1376]: cilium_host: Gained IPv6LL Jan 30 14:24:18.886994 systemd-networkd[1376]: lxc_health: Link UP Jan 30 14:24:18.890491 systemd-networkd[1376]: lxc_health: Gained carrier Jan 30 14:24:19.104779 systemd-networkd[1376]: lxca62a51f66eeb: Link UP Jan 30 14:24:19.113134 kernel: eth0: renamed from tmpa049f Jan 30 14:24:19.120571 systemd-networkd[1376]: lxca62a51f66eeb: Gained carrier Jan 30 14:24:19.129287 systemd-networkd[1376]: lxcb892dc82186e: Link UP Jan 30 14:24:19.139727 kernel: eth0: renamed from tmp9fea7 Jan 30 14:24:19.144798 systemd-networkd[1376]: lxcb892dc82186e: Gained carrier Jan 30 14:24:19.189555 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Jan 30 14:24:19.358652 kubelet[2793]: I0130 14:24:19.358538 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rtf8z" podStartSLOduration=10.115453968 podStartE2EDuration="15.358520866s" podCreationTimestamp="2025-01-30 14:24:04 +0000 UTC" firstStartedPulling="2025-01-30 14:24:05.460676265 +0000 UTC m=+16.402073597" lastFinishedPulling="2025-01-30 14:24:10.703743123 +0000 UTC m=+21.645140495" observedRunningTime="2025-01-30 14:24:15.319899145 +0000 UTC m=+26.261296477" watchObservedRunningTime="2025-01-30 14:24:19.358520866 +0000 UTC m=+30.299918198" Jan 30 14:24:20.598733 
systemd-networkd[1376]: lxcb892dc82186e: Gained IPv6LL Jan 30 14:24:20.660257 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jan 30 14:24:20.788251 systemd-networkd[1376]: lxca62a51f66eeb: Gained IPv6LL Jan 30 14:24:23.204110 containerd[1476]: time="2025-01-30T14:24:23.203975117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:24:23.206125 containerd[1476]: time="2025-01-30T14:24:23.204901404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:24:23.206125 containerd[1476]: time="2025-01-30T14:24:23.204928605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:23.206125 containerd[1476]: time="2025-01-30T14:24:23.205026805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:23.222451 containerd[1476]: time="2025-01-30T14:24:23.222312912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:24:23.222451 containerd[1476]: time="2025-01-30T14:24:23.222382792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:24:23.222451 containerd[1476]: time="2025-01-30T14:24:23.222398673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:23.222827 containerd[1476]: time="2025-01-30T14:24:23.222705955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:24:23.236894 systemd[1]: run-containerd-runc-k8s.io-a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347-runc.Pa3IwK.mount: Deactivated successfully. Jan 30 14:24:23.251526 systemd[1]: Started cri-containerd-a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347.scope - libcontainer container a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347. Jan 30 14:24:23.271429 systemd[1]: Started cri-containerd-9fea7feaed67e481f016473d612620861400547616dc87120ac856f7dd598316.scope - libcontainer container 9fea7feaed67e481f016473d612620861400547616dc87120ac856f7dd598316. 
Jan 30 14:24:23.318621 containerd[1476]: time="2025-01-30T14:24:23.318574647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ggzz,Uid:6ec2fbda-173d-4dfc-b238-2c1593590df3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347\"" Jan 30 14:24:23.329736 containerd[1476]: time="2025-01-30T14:24:23.329679981Z" level=info msg="CreateContainer within sandbox \"a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:24:23.345762 containerd[1476]: time="2025-01-30T14:24:23.345717276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmcjw,Uid:3e945d52-aa44-4400-ae68-803bec3643ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fea7feaed67e481f016473d612620861400547616dc87120ac856f7dd598316\"" Jan 30 14:24:23.352661 containerd[1476]: time="2025-01-30T14:24:23.352592055Z" level=info msg="CreateContainer within sandbox \"9fea7feaed67e481f016473d612620861400547616dc87120ac856f7dd598316\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:24:23.358076 containerd[1476]: time="2025-01-30T14:24:23.358013740Z" level=info msg="CreateContainer within sandbox \"a049f1d2243485e96c6647355b726cb51aaded1251f3def74a7abacfb6244347\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a140289c0603e6406fd3680285c998bafb96e6aa2bc8791cac9ef43ecb80028\"" Jan 30 14:24:23.360836 containerd[1476]: time="2025-01-30T14:24:23.359051229Z" level=info msg="StartContainer for \"7a140289c0603e6406fd3680285c998bafb96e6aa2bc8791cac9ef43ecb80028\"" Jan 30 14:24:23.375105 containerd[1476]: time="2025-01-30T14:24:23.375024404Z" level=info msg="CreateContainer within sandbox \"9fea7feaed67e481f016473d612620861400547616dc87120ac856f7dd598316\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45582500a0e5f0883974156dbf369ad2bb848eb54b1c234022cee35b777f7c79\"" Jan 30 14:24:23.379282 containerd[1476]: time="2025-01-30T14:24:23.379242320Z" level=info msg="StartContainer for \"45582500a0e5f0883974156dbf369ad2bb848eb54b1c234022cee35b777f7c79\"" Jan 30 14:24:23.404313 systemd[1]: Started cri-containerd-7a140289c0603e6406fd3680285c998bafb96e6aa2bc8791cac9ef43ecb80028.scope - libcontainer container 7a140289c0603e6406fd3680285c998bafb96e6aa2bc8791cac9ef43ecb80028. Jan 30 14:24:23.429608 systemd[1]: Started cri-containerd-45582500a0e5f0883974156dbf369ad2bb848eb54b1c234022cee35b777f7c79.scope - libcontainer container 45582500a0e5f0883974156dbf369ad2bb848eb54b1c234022cee35b777f7c79. 
Jan 30 14:24:23.463252 containerd[1476]: time="2025-01-30T14:24:23.463016309Z" level=info msg="StartContainer for \"7a140289c0603e6406fd3680285c998bafb96e6aa2bc8791cac9ef43ecb80028\" returns successfully" Jan 30 14:24:23.474615 containerd[1476]: time="2025-01-30T14:24:23.474564927Z" level=info msg="StartContainer for \"45582500a0e5f0883974156dbf369ad2bb848eb54b1c234022cee35b777f7c79\" returns successfully" Jan 30 14:24:24.344386 kubelet[2793]: I0130 14:24:24.343070 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pmcjw" podStartSLOduration=19.343046902 podStartE2EDuration="19.343046902s" podCreationTimestamp="2025-01-30 14:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:24:24.340599162 +0000 UTC m=+35.281996534" watchObservedRunningTime="2025-01-30 14:24:24.343046902 +0000 UTC m=+35.284444274" Jan 30 14:24:24.377145 kubelet[2793]: I0130 14:24:24.376054 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9ggzz" podStartSLOduration=19.37603506 podStartE2EDuration="19.37603506s" podCreationTimestamp="2025-01-30 14:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:24:24.360912093 +0000 UTC m=+35.302309425" watchObservedRunningTime="2025-01-30 14:24:24.37603506 +0000 UTC m=+35.317432352" Jan 30 14:24:26.572570 systemd[1]: Started sshd@14-49.13.124.2:22-165.232.147.130:47674.service - OpenSSH per-connection server daemon (165.232.147.130:47674). Jan 30 14:24:27.460024 sshd[4172]: Invalid user alex from 165.232.147.130 port 47674 Jan 30 14:24:27.618808 sshd[4172]: Received disconnect from 165.232.147.130 port 47674:11: Bye Bye [preauth] Jan 30 14:24:27.618808 sshd[4172]: Disconnected from invalid user alex 165.232.147.130 port 47674 [preauth] Jan 30 14:24:27.621259 systemd[1]: sshd@14-49.13.124.2:22-165.232.147.130:47674.service: Deactivated successfully. Jan 30 14:24:38.018646 systemd[1]: Started sshd@15-49.13.124.2:22-83.212.75.149:54210.service - OpenSSH per-connection server daemon (83.212.75.149:54210). Jan 30 14:24:38.381075 sshd[4181]: Invalid user deploy from 83.212.75.149 port 54210 Jan 30 14:24:38.439110 sshd[4181]: Received disconnect from 83.212.75.149 port 54210:11: Bye Bye [preauth] Jan 30 14:24:38.439290 sshd[4181]: Disconnected from invalid user deploy 83.212.75.149 port 54210 [preauth] Jan 30 14:24:38.442341 systemd[1]: sshd@15-49.13.124.2:22-83.212.75.149:54210.service: Deactivated successfully. Jan 30 14:25:18.641675 systemd[1]: sshd@9-49.13.124.2:22-153.37.192.4:35638.service: Deactivated successfully. Jan 30 14:25:38.242472 systemd[1]: Started sshd@16-49.13.124.2:22-165.232.147.130:33270.service - OpenSSH per-connection server daemon (165.232.147.130:33270). Jan 30 14:25:39.104613 sshd[4194]: Invalid user user from 165.232.147.130 port 33270 Jan 30 14:25:39.275390 sshd[4194]: Received disconnect from 165.232.147.130 port 33270:11: Bye Bye [preauth] Jan 30 14:25:39.275390 sshd[4194]: Disconnected from invalid user user 165.232.147.130 port 33270 [preauth] Jan 30 14:25:39.279053 systemd[1]: sshd@16-49.13.124.2:22-165.232.147.130:33270.service: Deactivated successfully. Jan 30 14:26:01.353487 systemd[1]: Started sshd@17-49.13.124.2:22-45.207.58.154:55038.service - OpenSSH per-connection server daemon (45.207.58.154:55038). 
Jan 30 14:26:03.749405 sshd[4201]: Invalid user admin from 45.207.58.154 port 55038 Jan 30 14:26:04.211029 sshd[4201]: Received disconnect from 45.207.58.154 port 55038:11: Bye Bye [preauth] Jan 30 14:26:04.211029 sshd[4201]: Disconnected from invalid user admin 45.207.58.154 port 55038 [preauth] Jan 30 14:26:04.214323 systemd[1]: sshd@17-49.13.124.2:22-45.207.58.154:55038.service: Deactivated successfully. Jan 30 14:26:14.853445 systemd[1]: Started sshd@18-49.13.124.2:22-36.26.72.149:57078.service - OpenSSH per-connection server daemon (36.26.72.149:57078). Jan 30 14:26:15.354939 systemd[1]: Started sshd@19-49.13.124.2:22-5.250.188.211:56050.service - OpenSSH per-connection server daemon (5.250.188.211:56050). Jan 30 14:26:15.701540 sshd[4211]: Invalid user ftpuser from 5.250.188.211 port 56050 Jan 30 14:26:15.754262 sshd[4211]: Received disconnect from 5.250.188.211 port 56050:11: Bye Bye [preauth] Jan 30 14:26:15.754262 sshd[4211]: Disconnected from invalid user ftpuser 5.250.188.211 port 56050 [preauth] Jan 30 14:26:15.757574 systemd[1]: sshd@19-49.13.124.2:22-5.250.188.211:56050.service: Deactivated successfully. Jan 30 14:26:40.157601 systemd[1]: Started sshd@20-49.13.124.2:22-83.212.75.149:40028.service - OpenSSH per-connection server daemon (83.212.75.149:40028). Jan 30 14:26:40.518979 sshd[4221]: Invalid user server from 83.212.75.149 port 40028 Jan 30 14:26:40.574792 sshd[4221]: Received disconnect from 83.212.75.149 port 40028:11: Bye Bye [preauth] Jan 30 14:26:40.574792 sshd[4221]: Disconnected from invalid user server 83.212.75.149 port 40028 [preauth] Jan 30 14:26:40.577482 systemd[1]: sshd@20-49.13.124.2:22-83.212.75.149:40028.service: Deactivated successfully. Jan 30 14:26:47.882111 systemd[1]: Started sshd@21-49.13.124.2:22-165.232.147.130:55400.service - OpenSSH per-connection server daemon (165.232.147.130:55400). Jan 30 14:26:48.787575 sshd[4226]: Invalid user alex from 165.232.147.130 port 55400 Jan 30 14:26:48.953396 sshd[4226]: Received disconnect from 165.232.147.130 port 55400:11: Bye Bye [preauth] Jan 30 14:26:48.953396 sshd[4226]: Disconnected from invalid user alex 165.232.147.130 port 55400 [preauth] Jan 30 14:26:48.956766 systemd[1]: sshd@21-49.13.124.2:22-165.232.147.130:55400.service: Deactivated successfully. Jan 30 14:27:07.255534 systemd[1]: Started sshd@22-49.13.124.2:22-140.206.168.98:43900.service - OpenSSH per-connection server daemon (140.206.168.98:43900). Jan 30 14:27:11.448472 sshd[4235]: Invalid user server from 140.206.168.98 port 43900 Jan 30 14:27:11.804364 sshd[4235]: Received disconnect from 140.206.168.98 port 43900:11: Bye Bye [preauth] Jan 30 14:27:11.804364 sshd[4235]: Disconnected from invalid user server 140.206.168.98 port 43900 [preauth] Jan 30 14:27:11.805981 systemd[1]: sshd@22-49.13.124.2:22-140.206.168.98:43900.service: Deactivated successfully. Jan 30 14:27:29.924523 systemd[1]: Started sshd@23-49.13.124.2:22-5.250.188.211:55354.service - OpenSSH per-connection server daemon (5.250.188.211:55354). Jan 30 14:27:30.257655 sshd[4240]: Invalid user user from 5.250.188.211 port 55354 Jan 30 14:27:30.307758 sshd[4240]: Received disconnect from 5.250.188.211 port 55354:11: Bye Bye [preauth] Jan 30 14:27:30.307758 sshd[4240]: Disconnected from invalid user user 5.250.188.211 port 55354 [preauth] Jan 30 14:27:30.310397 systemd[1]: sshd@23-49.13.124.2:22-5.250.188.211:55354.service: Deactivated successfully. 
Jan 30 14:27:57.139506 systemd[1]: Started sshd@24-49.13.124.2:22-83.212.75.149:46014.service - OpenSSH per-connection server daemon (83.212.75.149:46014). Jan 30 14:27:57.506870 sshd[4249]: Invalid user ftpuser from 83.212.75.149 port 46014 Jan 30 14:27:57.563205 sshd[4249]: Received disconnect from 83.212.75.149 port 46014:11: Bye Bye [preauth] Jan 30 14:27:57.563205 sshd[4249]: Disconnected from invalid user ftpuser 83.212.75.149 port 46014 [preauth] Jan 30 14:27:57.565214 systemd[1]: sshd@24-49.13.124.2:22-83.212.75.149:46014.service: Deactivated successfully. Jan 30 14:28:03.968364 systemd[1]: Started sshd@25-49.13.124.2:22-45.207.58.154:42189.service - OpenSSH per-connection server daemon (45.207.58.154:42189). Jan 30 14:28:06.725834 sshd[4254]: Invalid user alex from 45.207.58.154 port 42189 Jan 30 14:28:06.994260 sshd[4254]: Received disconnect from 45.207.58.154 port 42189:11: Bye Bye [preauth] Jan 30 14:28:06.994260 sshd[4254]: Disconnected from invalid user alex 45.207.58.154 port 42189 [preauth] Jan 30 14:28:06.995798 systemd[1]: sshd@25-49.13.124.2:22-45.207.58.154:42189.service: Deactivated successfully. Jan 30 14:28:14.865884 systemd[1]: sshd@18-49.13.124.2:22-36.26.72.149:57078.service: Deactivated successfully. Jan 30 14:28:16.930619 systemd[1]: Started sshd@26-49.13.124.2:22-140.206.168.98:50088.service - OpenSSH per-connection server daemon (140.206.168.98:50088). Jan 30 14:28:28.696417 systemd[1]: Started sshd@27-49.13.124.2:22-183.88.232.183:39890.service - OpenSSH per-connection server daemon (183.88.232.183:39890). Jan 30 14:28:29.742645 sshd[4265]: Invalid user lourdes from 183.88.232.183 port 39890 Jan 30 14:28:29.936102 sshd[4265]: Received disconnect from 183.88.232.183 port 39890:11: Bye Bye [preauth] Jan 30 14:28:29.936102 sshd[4265]: Disconnected from invalid user lourdes 183.88.232.183 port 39890 [preauth] Jan 30 14:28:29.939482 systemd[1]: sshd@27-49.13.124.2:22-183.88.232.183:39890.service: Deactivated successfully. Jan 30 14:28:38.384460 systemd[1]: Started sshd@28-49.13.124.2:22-139.178.68.195:46298.service - OpenSSH per-connection server daemon (139.178.68.195:46298). Jan 30 14:28:39.378380 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 46298 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:28:39.381423 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:39.391768 systemd-logind[1459]: New session 8 of user core. Jan 30 14:28:39.399482 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:28:40.152191 sshd[4272]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:40.160281 systemd[1]: sshd@28-49.13.124.2:22-139.178.68.195:46298.service: Deactivated successfully. Jan 30 14:28:40.164940 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:28:40.167934 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:28:40.169477 systemd-logind[1459]: Removed session 8. Jan 30 14:28:45.327899 systemd[1]: Started sshd@29-49.13.124.2:22-139.178.68.195:44816.service - OpenSSH per-connection server daemon (139.178.68.195:44816). Jan 30 14:28:46.228589 systemd[1]: Started sshd@30-49.13.124.2:22-5.250.188.211:46628.service - OpenSSH per-connection server daemon (5.250.188.211:46628). 
Jan 30 14:28:46.315793 sshd[4286]: Accepted publickey for core from 139.178.68.195 port 44816 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:28:46.318906 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:46.325210 systemd-logind[1459]: New session 9 of user core. Jan 30 14:28:46.329653 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:28:46.560290 sshd[4289]: Invalid user server from 5.250.188.211 port 46628 Jan 30 14:28:46.606867 sshd[4289]: Received disconnect from 5.250.188.211 port 46628:11: Bye Bye [preauth] Jan 30 14:28:46.607137 sshd[4289]: Disconnected from invalid user server 5.250.188.211 port 46628 [preauth] Jan 30 14:28:46.610498 systemd[1]: sshd@30-49.13.124.2:22-5.250.188.211:46628.service: Deactivated successfully. Jan 30 14:28:47.074710 sshd[4286]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:47.080143 systemd[1]: sshd@29-49.13.124.2:22-139.178.68.195:44816.service: Deactivated successfully. Jan 30 14:28:47.084818 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:28:47.088703 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:28:47.090073 systemd-logind[1459]: Removed session 9. Jan 30 14:28:52.257185 systemd[1]: Started sshd@31-49.13.124.2:22-139.178.68.195:44818.service - OpenSSH per-connection server daemon (139.178.68.195:44818). Jan 30 14:28:53.231725 sshd[4307]: Accepted publickey for core from 139.178.68.195 port 44818 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:28:53.234730 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:53.241023 systemd-logind[1459]: New session 10 of user core. Jan 30 14:28:53.246383 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:28:53.985404 sshd[4307]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:53.992488 systemd[1]: sshd@31-49.13.124.2:22-139.178.68.195:44818.service: Deactivated successfully. Jan 30 14:28:53.996886 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:28:53.999061 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:28:54.001518 systemd-logind[1459]: Removed session 10. Jan 30 14:28:59.164562 systemd[1]: Started sshd@32-49.13.124.2:22-139.178.68.195:59602.service - OpenSSH per-connection server daemon (139.178.68.195:59602). Jan 30 14:29:00.139949 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 59602 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:00.142523 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:00.147599 systemd-logind[1459]: New session 11 of user core. Jan 30 14:29:00.157363 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:29:00.896255 sshd[4321]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:00.901655 systemd[1]: sshd@32-49.13.124.2:22-139.178.68.195:59602.service: Deactivated successfully. Jan 30 14:29:00.904396 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:29:00.907563 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:29:00.908774 systemd-logind[1459]: Removed session 11. Jan 30 14:29:02.665501 systemd[1]: Started sshd@33-49.13.124.2:22-140.206.168.98:38330.service - OpenSSH per-connection server daemon (140.206.168.98:38330). 
Jan 30 14:29:06.076520 systemd[1]: Started sshd@34-49.13.124.2:22-139.178.68.195:55364.service - OpenSSH per-connection server daemon (139.178.68.195:55364). Jan 30 14:29:07.068255 sshd[4339]: Accepted publickey for core from 139.178.68.195 port 55364 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:07.070581 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:07.079492 systemd-logind[1459]: New session 12 of user core. Jan 30 14:29:07.085438 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:29:07.830737 sshd[4339]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:07.837366 systemd[1]: sshd@34-49.13.124.2:22-139.178.68.195:55364.service: Deactivated successfully. Jan 30 14:29:07.840841 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:29:07.842045 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:29:07.843064 systemd-logind[1459]: Removed session 12. Jan 30 14:29:13.003633 systemd[1]: Started sshd@35-49.13.124.2:22-139.178.68.195:55368.service - OpenSSH per-connection server daemon (139.178.68.195:55368). Jan 30 14:29:13.974981 sshd[4353]: Accepted publickey for core from 139.178.68.195 port 55368 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:13.978063 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:13.983755 systemd-logind[1459]: New session 13 of user core. Jan 30 14:29:13.999407 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:29:14.724793 sshd[4353]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:14.733294 systemd[1]: sshd@35-49.13.124.2:22-139.178.68.195:55368.service: Deactivated successfully. Jan 30 14:29:14.737136 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:29:14.738261 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:29:14.740263 systemd-logind[1459]: Removed session 13. Jan 30 14:29:15.841070 systemd[1]: Started sshd@36-49.13.124.2:22-117.41.160.73:47322.service - OpenSSH per-connection server daemon (117.41.160.73:47322). Jan 30 14:29:19.904536 systemd[1]: Started sshd@37-49.13.124.2:22-139.178.68.195:37312.service - OpenSSH per-connection server daemon (139.178.68.195:37312). Jan 30 14:29:20.883856 sshd[4368]: Accepted publickey for core from 139.178.68.195 port 37312 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:20.886468 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:20.893952 systemd-logind[1459]: New session 14 of user core. Jan 30 14:29:20.904484 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:29:21.640254 sshd[4368]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:21.646353 systemd[1]: sshd@37-49.13.124.2:22-139.178.68.195:37312.service: Deactivated successfully. Jan 30 14:29:21.648525 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:29:21.649824 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:29:21.650824 systemd-logind[1459]: Removed session 14. Jan 30 14:29:26.819202 systemd[1]: Started sshd@38-49.13.124.2:22-139.178.68.195:59140.service - OpenSSH per-connection server daemon (139.178.68.195:59140). 
Jan 30 14:29:26.957579 systemd[1]: Started sshd@39-49.13.124.2:22-80.251.219.209:60384.service - OpenSSH per-connection server daemon (80.251.219.209:60384). Jan 30 14:29:27.799811 sshd[4383]: Accepted publickey for core from 139.178.68.195 port 59140 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:27.802319 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:27.808853 systemd-logind[1459]: New session 15 of user core. Jan 30 14:29:27.811323 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:29:27.950759 sshd[4386]: Invalid user babak from 80.251.219.209 port 60384 Jan 30 14:29:28.143734 sshd[4386]: Received disconnect from 80.251.219.209 port 60384:11: Bye Bye [preauth] Jan 30 14:29:28.143734 sshd[4386]: Disconnected from invalid user babak 80.251.219.209 port 60384 [preauth] Jan 30 14:29:28.147144 systemd[1]: sshd@39-49.13.124.2:22-80.251.219.209:60384.service: Deactivated successfully. Jan 30 14:29:28.562638 sshd[4383]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:28.566964 systemd[1]: sshd@38-49.13.124.2:22-139.178.68.195:59140.service: Deactivated successfully. Jan 30 14:29:28.568718 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:29:28.571532 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:29:28.573395 systemd-logind[1459]: Removed session 15. Jan 30 14:29:29.586458 systemd[1]: Started sshd@40-49.13.124.2:22-83.212.75.149:51880.service - OpenSSH per-connection server daemon (83.212.75.149:51880). Jan 30 14:29:29.951715 sshd[4401]: Invalid user ftpuser from 83.212.75.149 port 51880 Jan 30 14:29:30.006020 sshd[4401]: Received disconnect from 83.212.75.149 port 51880:11: Bye Bye [preauth] Jan 30 14:29:30.006020 sshd[4401]: Disconnected from invalid user ftpuser 83.212.75.149 port 51880 [preauth] Jan 30 14:29:30.008057 systemd[1]: sshd@40-49.13.124.2:22-83.212.75.149:51880.service: Deactivated successfully. Jan 30 14:29:33.738427 systemd[1]: Started sshd@41-49.13.124.2:22-139.178.68.195:59144.service - OpenSSH per-connection server daemon (139.178.68.195:59144). Jan 30 14:29:34.720040 sshd[4406]: Accepted publickey for core from 139.178.68.195 port 59144 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:34.721739 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:34.727446 systemd-logind[1459]: New session 16 of user core. Jan 30 14:29:34.731399 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:29:34.877180 update_engine[1460]: I20250130 14:29:34.876731 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 14:29:34.877180 update_engine[1460]: I20250130 14:29:34.876850 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 14:29:34.877857 update_engine[1460]: I20250130 14:29:34.877245 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 14:29:34.878290 update_engine[1460]: I20250130 14:29:34.878150 1460 omaha_request_params.cc:62] Current group set to lts Jan 30 14:29:34.878290 update_engine[1460]: I20250130 14:29:34.878294 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 14:29:34.878495 update_engine[1460]: I20250130 14:29:34.878363 1460 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 14:29:34.878652 update_engine[1460]: I20250130 14:29:34.878587 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:29:34.878716 update_engine[1460]: I20250130 14:29:34.878659 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 14:29:34.878783 update_engine[1460]: I20250130 14:29:34.878755 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:29:34.878875 update_engine[1460]: I20250130 14:29:34.878777 1460 omaha_request_action.cc:272] Request: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: Jan 30 14:29:34.878875 update_engine[1460]: I20250130 14:29:34.878789 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:29:34.879289 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 14:29:34.880708 update_engine[1460]: I20250130 14:29:34.880656 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:29:34.881110 update_engine[1460]: I20250130 14:29:34.881057 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:29:34.883023 update_engine[1460]: E20250130 14:29:34.882872 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:29:34.883023 update_engine[1460]: I20250130 14:29:34.882974 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 14:29:35.481448 sshd[4406]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:35.486330 systemd[1]: sshd@41-49.13.124.2:22-139.178.68.195:59144.service: Deactivated successfully. Jan 30 14:29:35.488517 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:29:35.490196 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:29:35.491579 systemd-logind[1459]: Removed session 16. Jan 30 14:29:40.666642 systemd[1]: Started sshd@42-49.13.124.2:22-139.178.68.195:36620.service - OpenSSH per-connection server daemon (139.178.68.195:36620). Jan 30 14:29:41.667737 sshd[4422]: Accepted publickey for core from 139.178.68.195 port 36620 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:41.669945 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:41.680632 systemd-logind[1459]: New session 17 of user core. Jan 30 14:29:41.684395 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:29:42.431298 sshd[4422]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:42.437520 systemd[1]: sshd@42-49.13.124.2:22-139.178.68.195:36620.service: Deactivated successfully. Jan 30 14:29:42.439949 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:29:42.441377 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:29:42.442463 systemd-logind[1459]: Removed session 17. 
Jan 30 14:29:44.880255 update_engine[1460]: I20250130 14:29:44.879364 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:29:44.880255 update_engine[1460]: I20250130 14:29:44.879678 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:29:44.880255 update_engine[1460]: I20250130 14:29:44.880190 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:29:44.881806 update_engine[1460]: E20250130 14:29:44.881745 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:29:44.882063 update_engine[1460]: I20250130 14:29:44.882030 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 14:29:47.609572 systemd[1]: Started sshd@43-49.13.124.2:22-139.178.68.195:39848.service - OpenSSH per-connection server daemon (139.178.68.195:39848). Jan 30 14:29:48.589939 sshd[4436]: Accepted publickey for core from 139.178.68.195 port 39848 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:48.595112 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:48.601286 systemd-logind[1459]: New session 18 of user core. Jan 30 14:29:48.610433 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:29:49.353739 sshd[4436]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:49.358592 systemd[1]: sshd@43-49.13.124.2:22-139.178.68.195:39848.service: Deactivated successfully. Jan 30 14:29:49.363208 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:29:49.367877 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:29:49.371137 systemd-logind[1459]: Removed session 18. Jan 30 14:29:54.530414 systemd[1]: Started sshd@44-49.13.124.2:22-139.178.68.195:39862.service - OpenSSH per-connection server daemon (139.178.68.195:39862). Jan 30 14:29:54.880157 update_engine[1460]: I20250130 14:29:54.879853 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:29:54.880673 update_engine[1460]: I20250130 14:29:54.880270 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:29:54.880673 update_engine[1460]: I20250130 14:29:54.880573 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:29:54.881374 update_engine[1460]: E20250130 14:29:54.881317 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:29:54.881457 update_engine[1460]: I20250130 14:29:54.881395 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 14:29:55.176464 systemd[1]: Started sshd@45-49.13.124.2:22-140.206.168.98:56158.service - OpenSSH per-connection server daemon (140.206.168.98:56158). Jan 30 14:29:55.508244 sshd[4452]: Accepted publickey for core from 139.178.68.195 port 39862 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:29:55.510394 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:55.515156 systemd-logind[1459]: New session 19 of user core. Jan 30 14:29:55.522787 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:29:56.261341 sshd[4452]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:56.268017 systemd[1]: sshd@44-49.13.124.2:22-139.178.68.195:39862.service: Deactivated successfully. Jan 30 14:29:56.271039 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:29:56.272652 systemd-logind[1459]: Session 19 logged out. 
Waiting for processes to exit. Jan 30 14:29:56.273932 systemd-logind[1459]: Removed session 19. Jan 30 14:29:56.669573 systemd[1]: Started sshd@46-49.13.124.2:22-183.88.232.183:39130.service - OpenSSH per-connection server daemon (183.88.232.183:39130). Jan 30 14:29:57.685011 sshd[4469]: Invalid user jiaxuan from 183.88.232.183 port 39130 Jan 30 14:29:57.879328 sshd[4469]: Received disconnect from 183.88.232.183 port 39130:11: Bye Bye [preauth] Jan 30 14:29:57.879328 sshd[4469]: Disconnected from invalid user jiaxuan 183.88.232.183 port 39130 [preauth] Jan 30 14:29:57.880625 systemd[1]: sshd@46-49.13.124.2:22-183.88.232.183:39130.service: Deactivated successfully. Jan 30 14:30:01.436021 systemd[1]: Started sshd@47-49.13.124.2:22-139.178.68.195:56022.service - OpenSSH per-connection server daemon (139.178.68.195:56022). Jan 30 14:30:01.507385 systemd[1]: Started sshd@48-49.13.124.2:22-45.207.58.154:57573.service - OpenSSH per-connection server daemon (45.207.58.154:57573). Jan 30 14:30:02.418912 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 56022 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:02.421520 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:02.427957 systemd-logind[1459]: New session 20 of user core. Jan 30 14:30:02.435436 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:30:03.167694 sshd[4474]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:03.172497 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:30:03.173676 systemd[1]: sshd@47-49.13.124.2:22-139.178.68.195:56022.service: Deactivated successfully. Jan 30 14:30:03.176558 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:30:03.177968 systemd-logind[1459]: Removed session 20. Jan 30 14:30:04.047532 sshd[4477]: Invalid user server from 45.207.58.154 port 57573 Jan 30 14:30:04.354955 sshd[4477]: Received disconnect from 45.207.58.154 port 57573:11: Bye Bye [preauth] Jan 30 14:30:04.354955 sshd[4477]: Disconnected from invalid user server 45.207.58.154 port 57573 [preauth] Jan 30 14:30:04.357307 systemd[1]: sshd@48-49.13.124.2:22-45.207.58.154:57573.service: Deactivated successfully. Jan 30 14:30:04.878360 update_engine[1460]: I20250130 14:30:04.878149 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:30:04.878813 update_engine[1460]: I20250130 14:30:04.878651 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:30:04.879315 update_engine[1460]: I20250130 14:30:04.879225 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:30:04.880513 update_engine[1460]: E20250130 14:30:04.880434 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:30:04.880621 update_engine[1460]: I20250130 14:30:04.880527 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:30:04.880621 update_engine[1460]: I20250130 14:30:04.880543 1460 omaha_request_action.cc:617] Omaha request response: Jan 30 14:30:04.880734 update_engine[1460]: E20250130 14:30:04.880705 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 14:30:04.880791 update_engine[1460]: I20250130 14:30:04.880743 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 30 14:30:04.880791 update_engine[1460]: I20250130 14:30:04.880754 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:30:04.880862 update_engine[1460]: I20250130 14:30:04.880762 1460 update_attempter.cc:306] Processing Done. Jan 30 14:30:04.880893 update_engine[1460]: E20250130 14:30:04.880854 1460 update_attempter.cc:619] Update failed. Jan 30 14:30:04.880893 update_engine[1460]: I20250130 14:30:04.880872 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 14:30:04.880893 update_engine[1460]: I20250130 14:30:04.880880 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 14:30:04.880893 update_engine[1460]: I20250130 14:30:04.880889 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 14:30:04.881231 update_engine[1460]: I20250130 14:30:04.880980 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:30:04.881231 update_engine[1460]: I20250130 14:30:04.881049 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:30:04.881231 update_engine[1460]: I20250130 14:30:04.881065 1460 omaha_request_action.cc:272] Request: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: Jan 30 14:30:04.881231 update_engine[1460]: I20250130 14:30:04.881076 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:30:04.881518 update_engine[1460]: I20250130 14:30:04.881335 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:30:04.881582 update_engine[1460]: I20250130 14:30:04.881551 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:30:04.881885 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 14:30:04.882470 update_engine[1460]: E20250130 14:30:04.882346 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882428 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882437 1460 omaha_request_action.cc:617] Omaha request response: Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882450 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882457 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882463 1460 update_attempter.cc:306] Processing Done. Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882470 1460 update_attempter.cc:310] Error event sent. 
Jan 30 14:30:04.882470 update_engine[1460]: I20250130 14:30:04.882480 1460 update_check_scheduler.cc:74] Next update check in 46m22s Jan 30 14:30:04.882826 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 14:30:07.621492 systemd[1]: Started sshd@49-49.13.124.2:22-5.250.188.211:44070.service - OpenSSH per-connection server daemon (5.250.188.211:44070). Jan 30 14:30:07.955891 sshd[4495]: Invalid user user from 5.250.188.211 port 44070 Jan 30 14:30:08.006415 sshd[4495]: Received disconnect from 5.250.188.211 port 44070:11: Bye Bye [preauth] Jan 30 14:30:08.006415 sshd[4495]: Disconnected from invalid user user 5.250.188.211 port 44070 [preauth] Jan 30 14:30:08.009116 systemd[1]: sshd@49-49.13.124.2:22-5.250.188.211:44070.service: Deactivated successfully. Jan 30 14:30:08.347513 systemd[1]: Started sshd@50-49.13.124.2:22-139.178.68.195:44360.service - OpenSSH per-connection server daemon (139.178.68.195:44360). Jan 30 14:30:09.325223 sshd[4500]: Accepted publickey for core from 139.178.68.195 port 44360 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:09.327576 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:09.335944 systemd-logind[1459]: New session 21 of user core. Jan 30 14:30:09.340403 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:30:09.989422 sshd[4455]: Connection closed by 140.206.168.98 port 56158 [preauth] Jan 30 14:30:09.990884 systemd[1]: sshd@45-49.13.124.2:22-140.206.168.98:56158.service: Deactivated successfully. Jan 30 14:30:10.076178 sshd[4500]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:10.080852 systemd[1]: sshd@50-49.13.124.2:22-139.178.68.195:44360.service: Deactivated successfully. Jan 30 14:30:10.085250 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:30:10.087595 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:30:10.088715 systemd-logind[1459]: Removed session 21. Jan 30 14:30:15.248604 systemd[1]: Started sshd@51-49.13.124.2:22-139.178.68.195:57774.service - OpenSSH per-connection server daemon (139.178.68.195:57774). Jan 30 14:30:16.219925 sshd[4517]: Accepted publickey for core from 139.178.68.195 port 57774 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:16.222977 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:16.230041 systemd-logind[1459]: New session 22 of user core. Jan 30 14:30:16.240464 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:30:16.944447 systemd[1]: sshd@26-49.13.124.2:22-140.206.168.98:50088.service: Deactivated successfully. Jan 30 14:30:16.972295 sshd[4517]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:16.978888 systemd[1]: sshd@51-49.13.124.2:22-139.178.68.195:57774.service: Deactivated successfully. Jan 30 14:30:16.982077 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:30:16.982985 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:30:16.984341 systemd-logind[1459]: Removed session 22. Jan 30 14:30:22.147565 systemd[1]: Started sshd@52-49.13.124.2:22-139.178.68.195:57778.service - OpenSSH per-connection server daemon (139.178.68.195:57778). 
Jan 30 14:30:23.124374 sshd[4532]: Accepted publickey for core from 139.178.68.195 port 57778 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:23.126528 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:23.132038 systemd-logind[1459]: New session 23 of user core. Jan 30 14:30:23.140439 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:30:23.875549 sshd[4532]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:23.881292 systemd[1]: sshd@52-49.13.124.2:22-139.178.68.195:57778.service: Deactivated successfully. Jan 30 14:30:23.884073 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:30:23.885054 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:30:23.886612 systemd-logind[1459]: Removed session 23. Jan 30 14:30:29.050553 systemd[1]: Started sshd@53-49.13.124.2:22-139.178.68.195:57140.service - OpenSSH per-connection server daemon (139.178.68.195:57140). Jan 30 14:30:30.023459 sshd[4546]: Accepted publickey for core from 139.178.68.195 port 57140 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:30.025361 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:30.032733 systemd-logind[1459]: New session 24 of user core. Jan 30 14:30:30.041379 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:30:30.773686 sshd[4546]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:30.778729 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:30:30.779005 systemd[1]: sshd@53-49.13.124.2:22-139.178.68.195:57140.service: Deactivated successfully. Jan 30 14:30:30.781519 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:30:30.784786 systemd-logind[1459]: Removed session 24. Jan 30 14:30:35.944236 systemd[1]: Started sshd@54-49.13.124.2:22-139.178.68.195:36298.service - OpenSSH per-connection server daemon (139.178.68.195:36298). Jan 30 14:30:36.941867 sshd[4562]: Accepted publickey for core from 139.178.68.195 port 36298 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:36.943958 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:36.949465 systemd-logind[1459]: New session 25 of user core. Jan 30 14:30:36.954351 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:30:37.696283 sshd[4562]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:37.700785 systemd[1]: sshd@54-49.13.124.2:22-139.178.68.195:36298.service: Deactivated successfully. Jan 30 14:30:37.702570 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:30:37.706437 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:30:37.707631 systemd-logind[1459]: Removed session 25. Jan 30 14:30:39.889768 systemd[1]: Started sshd@55-49.13.124.2:22-140.206.168.98:36338.service - OpenSSH per-connection server daemon (140.206.168.98:36338). Jan 30 14:30:42.189457 systemd[1]: Started sshd@56-49.13.124.2:22-83.212.75.149:54342.service - OpenSSH per-connection server daemon (83.212.75.149:54342). 
Jan 30 14:30:42.551527 sshd[4579]: Invalid user user1 from 83.212.75.149 port 54342 Jan 30 14:30:42.609276 sshd[4579]: Received disconnect from 83.212.75.149 port 54342:11: Bye Bye [preauth] Jan 30 14:30:42.609438 sshd[4579]: Disconnected from invalid user user1 83.212.75.149 port 54342 [preauth] Jan 30 14:30:42.611719 systemd[1]: sshd@56-49.13.124.2:22-83.212.75.149:54342.service: Deactivated successfully. Jan 30 14:30:42.874630 systemd[1]: Started sshd@57-49.13.124.2:22-139.178.68.195:36306.service - OpenSSH per-connection server daemon (139.178.68.195:36306). Jan 30 14:30:43.852591 sshd[4584]: Accepted publickey for core from 139.178.68.195 port 36306 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:43.854835 sshd[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:43.861537 systemd-logind[1459]: New session 26 of user core. Jan 30 14:30:43.869548 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:30:44.625574 sshd[4584]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:44.631812 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit. Jan 30 14:30:44.632733 systemd[1]: sshd@57-49.13.124.2:22-139.178.68.195:36306.service: Deactivated successfully. Jan 30 14:30:44.635988 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:30:44.638119 systemd-logind[1459]: Removed session 26. Jan 30 14:30:49.804480 systemd[1]: Started sshd@58-49.13.124.2:22-139.178.68.195:60148.service - OpenSSH per-connection server daemon (139.178.68.195:60148). Jan 30 14:30:50.789436 sshd[4599]: Accepted publickey for core from 139.178.68.195 port 60148 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:50.791798 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:50.798154 systemd-logind[1459]: New session 27 of user core. Jan 30 14:30:50.804394 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 14:30:51.546941 sshd[4599]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:51.552903 systemd[1]: sshd@58-49.13.124.2:22-139.178.68.195:60148.service: Deactivated successfully. Jan 30 14:30:51.556161 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 14:30:51.560291 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit. Jan 30 14:30:51.562265 systemd-logind[1459]: Removed session 27. Jan 30 14:30:56.726425 systemd[1]: Started sshd@59-49.13.124.2:22-139.178.68.195:44466.service - OpenSSH per-connection server daemon (139.178.68.195:44466). Jan 30 14:30:57.714990 sshd[4613]: Accepted publickey for core from 139.178.68.195 port 44466 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:30:57.717257 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:30:57.723007 systemd-logind[1459]: New session 28 of user core. Jan 30 14:30:57.729405 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 14:30:58.499684 sshd[4613]: pam_unix(sshd:session): session closed for user core Jan 30 14:30:58.504781 systemd[1]: sshd@59-49.13.124.2:22-139.178.68.195:44466.service: Deactivated successfully. Jan 30 14:30:58.507452 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 14:30:58.508469 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit. Jan 30 14:30:58.510549 systemd-logind[1459]: Removed session 28. 
Jan 30 14:31:02.684946 systemd[1]: sshd@33-49.13.124.2:22-140.206.168.98:38330.service: Deactivated successfully. Jan 30 14:31:03.676997 systemd[1]: Started sshd@60-49.13.124.2:22-139.178.68.195:44468.service - OpenSSH per-connection server daemon (139.178.68.195:44468). Jan 30 14:31:04.650209 sshd[4629]: Accepted publickey for core from 139.178.68.195 port 44468 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:04.652568 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:04.658611 systemd-logind[1459]: New session 29 of user core. Jan 30 14:31:04.664423 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 14:31:05.395305 sshd[4629]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:05.399262 systemd-logind[1459]: Session 29 logged out. Waiting for processes to exit. Jan 30 14:31:05.399424 systemd[1]: sshd@60-49.13.124.2:22-139.178.68.195:44468.service: Deactivated successfully. Jan 30 14:31:05.402927 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 14:31:05.405699 systemd-logind[1459]: Removed session 29. Jan 30 14:31:10.572973 systemd[1]: Started sshd@61-49.13.124.2:22-139.178.68.195:51228.service - OpenSSH per-connection server daemon (139.178.68.195:51228). Jan 30 14:31:10.656685 systemd[1]: Started sshd@62-49.13.124.2:22-80.251.219.209:60896.service - OpenSSH per-connection server daemon (80.251.219.209:60896). Jan 30 14:31:11.567881 sshd[4645]: Accepted publickey for core from 139.178.68.195 port 51228 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:11.569971 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:11.577230 systemd-logind[1459]: New session 30 of user core. Jan 30 14:31:11.585384 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 30 14:31:11.646963 sshd[4648]: Invalid user lourdes from 80.251.219.209 port 60896 Jan 30 14:31:11.852873 sshd[4648]: Received disconnect from 80.251.219.209 port 60896:11: Bye Bye [preauth] Jan 30 14:31:11.852873 sshd[4648]: Disconnected from invalid user lourdes 80.251.219.209 port 60896 [preauth] Jan 30 14:31:11.855264 systemd[1]: sshd@62-49.13.124.2:22-80.251.219.209:60896.service: Deactivated successfully. Jan 30 14:31:12.322484 sshd[4645]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:12.328612 systemd[1]: sshd@61-49.13.124.2:22-139.178.68.195:51228.service: Deactivated successfully. Jan 30 14:31:12.330926 systemd[1]: session-30.scope: Deactivated successfully. Jan 30 14:31:12.331886 systemd-logind[1459]: Session 30 logged out. Waiting for processes to exit. Jan 30 14:31:12.333389 systemd-logind[1459]: Removed session 30. Jan 30 14:31:15.853911 systemd[1]: sshd@36-49.13.124.2:22-117.41.160.73:47322.service: Deactivated successfully. Jan 30 14:31:17.499552 systemd[1]: Started sshd@63-49.13.124.2:22-139.178.68.195:38064.service - OpenSSH per-connection server daemon (139.178.68.195:38064). Jan 30 14:31:18.475139 sshd[4667]: Accepted publickey for core from 139.178.68.195 port 38064 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:18.476570 sshd[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:18.482662 systemd-logind[1459]: New session 31 of user core. Jan 30 14:31:18.489420 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 30 14:31:19.229878 sshd[4667]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:19.236901 systemd-logind[1459]: Session 31 logged out. Waiting for processes to exit. Jan 30 14:31:19.238939 systemd[1]: sshd@63-49.13.124.2:22-139.178.68.195:38064.service: Deactivated successfully. Jan 30 14:31:19.245074 systemd[1]: session-31.scope: Deactivated successfully. Jan 30 14:31:19.247265 systemd-logind[1459]: Removed session 31. Jan 30 14:31:24.422980 systemd[1]: Started sshd@64-49.13.124.2:22-139.178.68.195:38066.service - OpenSSH per-connection server daemon (139.178.68.195:38066). Jan 30 14:31:25.409378 sshd[4682]: Accepted publickey for core from 139.178.68.195 port 38066 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:25.411843 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:25.418945 systemd-logind[1459]: New session 32 of user core. Jan 30 14:31:25.424672 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 30 14:31:26.173785 sshd[4682]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:26.177913 systemd-logind[1459]: Session 32 logged out. Waiting for processes to exit. Jan 30 14:31:26.178966 systemd[1]: sshd@64-49.13.124.2:22-139.178.68.195:38066.service: Deactivated successfully. Jan 30 14:31:26.181430 systemd[1]: session-32.scope: Deactivated successfully. Jan 30 14:31:26.184053 systemd-logind[1459]: Removed session 32. Jan 30 14:31:27.592546 systemd[1]: Started sshd@65-49.13.124.2:22-183.88.232.183:38370.service - OpenSSH per-connection server daemon (183.88.232.183:38370). Jan 30 14:31:28.649743 sshd[4696]: Invalid user chart from 183.88.232.183 port 38370 Jan 30 14:31:28.846749 sshd[4696]: Received disconnect from 183.88.232.183 port 38370:11: Bye Bye [preauth] Jan 30 14:31:28.846749 sshd[4696]: Disconnected from invalid user chart 183.88.232.183 port 38370 [preauth] Jan 30 14:31:28.850915 systemd[1]: sshd@65-49.13.124.2:22-183.88.232.183:38370.service: Deactivated successfully. Jan 30 14:31:30.246441 systemd[1]: Started sshd@66-49.13.124.2:22-140.206.168.98:44010.service - OpenSSH per-connection server daemon (140.206.168.98:44010). Jan 30 14:31:31.351552 systemd[1]: Started sshd@67-49.13.124.2:22-139.178.68.195:56288.service - OpenSSH per-connection server daemon (139.178.68.195:56288). Jan 30 14:31:32.329793 sshd[4704]: Accepted publickey for core from 139.178.68.195 port 56288 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:32.332433 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:32.338131 systemd-logind[1459]: New session 33 of user core. Jan 30 14:31:32.344322 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 30 14:31:33.093782 sshd[4704]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:33.098725 systemd[1]: sshd@67-49.13.124.2:22-139.178.68.195:56288.service: Deactivated successfully. Jan 30 14:31:33.101779 systemd[1]: session-33.scope: Deactivated successfully. Jan 30 14:31:33.106249 systemd-logind[1459]: Session 33 logged out. Waiting for processes to exit. Jan 30 14:31:33.108210 systemd-logind[1459]: Removed session 33. Jan 30 14:31:36.999448 systemd[1]: Started sshd@68-49.13.124.2:22-5.250.188.211:57182.service - OpenSSH per-connection server daemon (5.250.188.211:57182). 
Jan 30 14:31:37.338340 sshd[4721]: Invalid user sammy from 5.250.188.211 port 57182 Jan 30 14:31:37.387203 sshd[4721]: Received disconnect from 5.250.188.211 port 57182:11: Bye Bye [preauth] Jan 30 14:31:37.387203 sshd[4721]: Disconnected from invalid user sammy 5.250.188.211 port 57182 [preauth] Jan 30 14:31:37.390341 systemd[1]: sshd@68-49.13.124.2:22-5.250.188.211:57182.service: Deactivated successfully. Jan 30 14:31:38.265343 systemd[1]: Started sshd@69-49.13.124.2:22-139.178.68.195:40878.service - OpenSSH per-connection server daemon (139.178.68.195:40878). Jan 30 14:31:39.254700 sshd[4727]: Accepted publickey for core from 139.178.68.195 port 40878 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:39.256531 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:39.265221 systemd-logind[1459]: New session 34 of user core. Jan 30 14:31:39.269097 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 30 14:31:40.003552 sshd[4727]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:40.009641 systemd[1]: sshd@69-49.13.124.2:22-139.178.68.195:40878.service: Deactivated successfully. Jan 30 14:31:40.013281 systemd[1]: session-34.scope: Deactivated successfully. Jan 30 14:31:40.015304 systemd-logind[1459]: Session 34 logged out. Waiting for processes to exit. Jan 30 14:31:40.016507 systemd-logind[1459]: Removed session 34. Jan 30 14:31:45.182433 systemd[1]: Started sshd@70-49.13.124.2:22-139.178.68.195:49888.service - OpenSSH per-connection server daemon (139.178.68.195:49888). Jan 30 14:31:46.151169 sshd[4741]: Accepted publickey for core from 139.178.68.195 port 49888 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:46.153042 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:46.160181 systemd-logind[1459]: New session 35 of user core. Jan 30 14:31:46.165363 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 30 14:31:46.906045 sshd[4741]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:46.911160 systemd[1]: sshd@70-49.13.124.2:22-139.178.68.195:49888.service: Deactivated successfully. Jan 30 14:31:46.914055 systemd[1]: session-35.scope: Deactivated successfully. Jan 30 14:31:46.915238 systemd-logind[1459]: Session 35 logged out. Waiting for processes to exit. Jan 30 14:31:46.917135 systemd-logind[1459]: Removed session 35. Jan 30 14:31:52.083545 systemd[1]: Started sshd@71-49.13.124.2:22-139.178.68.195:49894.service - OpenSSH per-connection server daemon (139.178.68.195:49894). Jan 30 14:31:53.060508 sshd[4756]: Accepted publickey for core from 139.178.68.195 port 49894 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:53.062813 sshd[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:53.068620 systemd-logind[1459]: New session 36 of user core. Jan 30 14:31:53.073363 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 30 14:31:53.811663 sshd[4756]: pam_unix(sshd:session): session closed for user core Jan 30 14:31:53.816498 systemd[1]: sshd@71-49.13.124.2:22-139.178.68.195:49894.service: Deactivated successfully. Jan 30 14:31:53.821039 systemd[1]: session-36.scope: Deactivated successfully. Jan 30 14:31:53.822166 systemd-logind[1459]: Session 36 logged out. Waiting for processes to exit. Jan 30 14:31:53.823408 systemd-logind[1459]: Removed session 36. 
Jan 30 14:31:57.228683 systemd[1]: Started sshd@72-49.13.124.2:22-83.212.75.149:33916.service - OpenSSH per-connection server daemon (83.212.75.149:33916). Jan 30 14:31:57.591153 sshd[4769]: Invalid user ftpuser from 83.212.75.149 port 33916 Jan 30 14:31:57.646532 sshd[4769]: Received disconnect from 83.212.75.149 port 33916:11: Bye Bye [preauth] Jan 30 14:31:57.646532 sshd[4769]: Disconnected from invalid user ftpuser 83.212.75.149 port 33916 [preauth] Jan 30 14:31:57.650180 systemd[1]: sshd@72-49.13.124.2:22-83.212.75.149:33916.service: Deactivated successfully. Jan 30 14:31:58.985256 systemd[1]: Started sshd@73-49.13.124.2:22-139.178.68.195:55374.service - OpenSSH per-connection server daemon (139.178.68.195:55374). Jan 30 14:31:59.983152 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 55374 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:31:59.985381 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:31:59.993184 systemd-logind[1459]: New session 37 of user core. Jan 30 14:32:00.000138 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 30 14:32:00.734847 sshd[4774]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:00.743724 systemd[1]: sshd@73-49.13.124.2:22-139.178.68.195:55374.service: Deactivated successfully. Jan 30 14:32:00.749788 systemd[1]: session-37.scope: Deactivated successfully. Jan 30 14:32:00.753870 systemd-logind[1459]: Session 37 logged out. Waiting for processes to exit. Jan 30 14:32:00.756800 systemd-logind[1459]: Removed session 37. Jan 30 14:32:00.813681 systemd[1]: Started sshd@74-49.13.124.2:22-45.207.58.154:44724.service - OpenSSH per-connection server daemon (45.207.58.154:44724). Jan 30 14:32:03.447929 sshd[4787]: Invalid user git from 45.207.58.154 port 44724 Jan 30 14:32:03.491773 systemd[1]: Started sshd@75-49.13.124.2:22-185.147.125.200:8494.service - OpenSSH per-connection server daemon (185.147.125.200:8494). Jan 30 14:32:03.759923 sshd[4787]: Received disconnect from 45.207.58.154 port 44724:11: Bye Bye [preauth] Jan 30 14:32:03.759923 sshd[4787]: Disconnected from invalid user git 45.207.58.154 port 44724 [preauth] Jan 30 14:32:03.761831 systemd[1]: sshd@74-49.13.124.2:22-45.207.58.154:44724.service: Deactivated successfully. Jan 30 14:32:03.766933 sshd[4790]: Invalid user prueba from 185.147.125.200 port 8494 Jan 30 14:32:03.813034 sshd[4794]: pam_faillock(sshd:auth): User unknown Jan 30 14:32:03.817993 sshd[4790]: Postponed keyboard-interactive for invalid user prueba from 185.147.125.200 port 8494 ssh2 [preauth] Jan 30 14:32:03.857075 sshd[4794]: pam_unix(sshd:auth): check pass; user unknown Jan 30 14:32:03.857152 sshd[4794]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.147.125.200 Jan 30 14:32:03.858399 sshd[4794]: pam_faillock(sshd:auth): User unknown Jan 30 14:32:05.907414 systemd[1]: Started sshd@76-49.13.124.2:22-139.178.68.195:47082.service - OpenSSH per-connection server daemon (139.178.68.195:47082). 
Jan 30 14:32:05.950079 sshd[4790]: PAM: Permission denied for illegal user prueba from 185.147.125.200 Jan 30 14:32:05.950079 sshd[4790]: Failed keyboard-interactive/pam for invalid user prueba from 185.147.125.200 port 8494 ssh2 Jan 30 14:32:05.992218 sshd[4790]: Received disconnect from 185.147.125.200 port 8494:11: Client disconnecting normally [preauth] Jan 30 14:32:05.992218 sshd[4790]: Disconnected from invalid user prueba 185.147.125.200 port 8494 [preauth] Jan 30 14:32:05.995129 systemd[1]: sshd@75-49.13.124.2:22-185.147.125.200:8494.service: Deactivated successfully. Jan 30 14:32:06.881289 sshd[4798]: Accepted publickey for core from 139.178.68.195 port 47082 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:06.883444 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:06.888974 systemd-logind[1459]: New session 38 of user core. Jan 30 14:32:06.896521 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 30 14:32:07.625444 sshd[4798]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:07.632032 systemd[1]: sshd@76-49.13.124.2:22-139.178.68.195:47082.service: Deactivated successfully. Jan 30 14:32:07.635414 systemd[1]: session-38.scope: Deactivated successfully. Jan 30 14:32:07.636842 systemd-logind[1459]: Session 38 logged out. Waiting for processes to exit. Jan 30 14:32:07.640408 systemd-logind[1459]: Removed session 38. Jan 30 14:32:12.805816 systemd[1]: Started sshd@77-49.13.124.2:22-139.178.68.195:47086.service - OpenSSH per-connection server daemon (139.178.68.195:47086). Jan 30 14:32:13.793726 sshd[4813]: Accepted publickey for core from 139.178.68.195 port 47086 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:13.795805 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:13.803274 systemd-logind[1459]: New session 39 of user core. Jan 30 14:32:13.807372 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 30 14:32:14.557932 sshd[4813]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:14.562471 systemd[1]: sshd@77-49.13.124.2:22-139.178.68.195:47086.service: Deactivated successfully. Jan 30 14:32:14.567052 systemd[1]: session-39.scope: Deactivated successfully. Jan 30 14:32:14.570299 systemd-logind[1459]: Session 39 logged out. Waiting for processes to exit. Jan 30 14:32:14.573159 systemd-logind[1459]: Removed session 39. Jan 30 14:32:19.734885 systemd[1]: Started sshd@78-49.13.124.2:22-139.178.68.195:54852.service - OpenSSH per-connection server daemon (139.178.68.195:54852). Jan 30 14:32:20.715159 sshd[4827]: Accepted publickey for core from 139.178.68.195 port 54852 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:20.718128 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:20.725698 systemd-logind[1459]: New session 40 of user core. Jan 30 14:32:20.732347 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 30 14:32:21.469742 sshd[4827]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:21.474205 systemd[1]: sshd@78-49.13.124.2:22-139.178.68.195:54852.service: Deactivated successfully. Jan 30 14:32:21.477408 systemd[1]: session-40.scope: Deactivated successfully. Jan 30 14:32:21.478700 systemd-logind[1459]: Session 40 logged out. Waiting for processes to exit. Jan 30 14:32:21.480649 systemd-logind[1459]: Removed session 40. 
Jan 30 14:32:21.513475 systemd[1]: Started sshd@79-49.13.124.2:22-140.206.168.98:51510.service - OpenSSH per-connection server daemon (140.206.168.98:51510). Jan 30 14:32:26.653615 systemd[1]: Started sshd@80-49.13.124.2:22-139.178.68.195:55284.service - OpenSSH per-connection server daemon (139.178.68.195:55284). Jan 30 14:32:27.628169 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 55284 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:27.628973 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:27.634411 systemd-logind[1459]: New session 41 of user core. Jan 30 14:32:27.644326 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 30 14:32:28.381539 sshd[4844]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:28.385371 systemd[1]: sshd@80-49.13.124.2:22-139.178.68.195:55284.service: Deactivated successfully. Jan 30 14:32:28.388545 systemd[1]: session-41.scope: Deactivated successfully. Jan 30 14:32:28.390513 systemd-logind[1459]: Session 41 logged out. Waiting for processes to exit. Jan 30 14:32:28.391905 systemd-logind[1459]: Removed session 41. Jan 30 14:32:33.563956 systemd[1]: Started sshd@81-49.13.124.2:22-139.178.68.195:55300.service - OpenSSH per-connection server daemon (139.178.68.195:55300). Jan 30 14:32:34.551766 sshd[4858]: Accepted publickey for core from 139.178.68.195 port 55300 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:34.553952 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:34.560059 systemd-logind[1459]: New session 42 of user core. Jan 30 14:32:34.566318 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 30 14:32:34.662538 systemd[1]: Started sshd@82-49.13.124.2:22-80.251.219.209:59246.service - OpenSSH per-connection server daemon (80.251.219.209:59246). Jan 30 14:32:35.306614 sshd[4858]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:35.311461 systemd[1]: sshd@81-49.13.124.2:22-139.178.68.195:55300.service: Deactivated successfully. Jan 30 14:32:35.315195 systemd[1]: session-42.scope: Deactivated successfully. Jan 30 14:32:35.317126 systemd-logind[1459]: Session 42 logged out. Waiting for processes to exit. Jan 30 14:32:35.318937 systemd-logind[1459]: Removed session 42. Jan 30 14:32:35.658210 sshd[4862]: Invalid user test_ftp from 80.251.219.209 port 59246 Jan 30 14:32:35.850760 sshd[4862]: Received disconnect from 80.251.219.209 port 59246:11: Bye Bye [preauth] Jan 30 14:32:35.850760 sshd[4862]: Disconnected from invalid user test_ftp 80.251.219.209 port 59246 [preauth] Jan 30 14:32:35.852807 systemd[1]: sshd@82-49.13.124.2:22-80.251.219.209:59246.service: Deactivated successfully. Jan 30 14:32:39.905600 systemd[1]: sshd@55-49.13.124.2:22-140.206.168.98:36338.service: Deactivated successfully. Jan 30 14:32:40.487403 systemd[1]: Started sshd@83-49.13.124.2:22-139.178.68.195:57162.service - OpenSSH per-connection server daemon (139.178.68.195:57162). Jan 30 14:32:41.474473 sshd[4881]: Accepted publickey for core from 139.178.68.195 port 57162 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:41.477044 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:41.483844 systemd-logind[1459]: New session 43 of user core. Jan 30 14:32:41.492523 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 30 14:32:42.238061 sshd[4881]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:42.245194 systemd[1]: sshd@83-49.13.124.2:22-139.178.68.195:57162.service: Deactivated successfully. Jan 30 14:32:42.248870 systemd[1]: session-43.scope: Deactivated successfully. Jan 30 14:32:42.252347 systemd-logind[1459]: Session 43 logged out. Waiting for processes to exit. Jan 30 14:32:42.253652 systemd-logind[1459]: Removed session 43. Jan 30 14:32:42.416535 systemd[1]: Started sshd@84-49.13.124.2:22-139.178.68.195:57164.service - OpenSSH per-connection server daemon (139.178.68.195:57164). Jan 30 14:32:43.397157 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 57164 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:43.399767 sshd[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:43.405873 systemd-logind[1459]: New session 44 of user core. Jan 30 14:32:43.413397 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 30 14:32:44.199550 sshd[4895]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:44.204271 systemd[1]: sshd@84-49.13.124.2:22-139.178.68.195:57164.service: Deactivated successfully. Jan 30 14:32:44.206304 systemd[1]: session-44.scope: Deactivated successfully. Jan 30 14:32:44.207405 systemd-logind[1459]: Session 44 logged out. Waiting for processes to exit. Jan 30 14:32:44.208940 systemd-logind[1459]: Removed session 44. Jan 30 14:32:44.376638 systemd[1]: Started sshd@85-49.13.124.2:22-139.178.68.195:57168.service - OpenSSH per-connection server daemon (139.178.68.195:57168). Jan 30 14:32:45.367172 sshd[4906]: Accepted publickey for core from 139.178.68.195 port 57168 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:45.369413 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:45.374973 systemd-logind[1459]: New session 45 of user core. Jan 30 14:32:45.386458 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 30 14:32:46.134274 sshd[4906]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:46.139410 systemd[1]: sshd@85-49.13.124.2:22-139.178.68.195:57168.service: Deactivated successfully. Jan 30 14:32:46.142492 systemd[1]: session-45.scope: Deactivated successfully. Jan 30 14:32:46.143523 systemd-logind[1459]: Session 45 logged out. Waiting for processes to exit. Jan 30 14:32:46.145038 systemd-logind[1459]: Removed session 45. Jan 30 14:32:51.310016 systemd[1]: Started sshd@86-49.13.124.2:22-139.178.68.195:43084.service - OpenSSH per-connection server daemon (139.178.68.195:43084). Jan 30 14:32:52.283616 sshd[4920]: Accepted publickey for core from 139.178.68.195 port 43084 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:52.286295 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:52.296023 systemd-logind[1459]: New session 46 of user core. Jan 30 14:32:52.304435 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 30 14:32:53.032614 sshd[4920]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:53.039145 systemd[1]: sshd@86-49.13.124.2:22-139.178.68.195:43084.service: Deactivated successfully. Jan 30 14:32:53.041654 systemd[1]: session-46.scope: Deactivated successfully. Jan 30 14:32:53.043091 systemd-logind[1459]: Session 46 logged out. Waiting for processes to exit. Jan 30 14:32:53.044505 systemd-logind[1459]: Removed session 46. 
Jan 30 14:32:55.722681 systemd[1]: Started sshd@87-49.13.124.2:22-183.88.232.183:37618.service - OpenSSH per-connection server daemon (183.88.232.183:37618). Jan 30 14:32:56.974315 sshd[4933]: Received disconnect from 183.88.232.183 port 37618:11: Bye Bye [preauth] Jan 30 14:32:56.974315 sshd[4933]: Disconnected from authenticating user root 183.88.232.183 port 37618 [preauth] Jan 30 14:32:56.977526 systemd[1]: sshd@87-49.13.124.2:22-183.88.232.183:37618.service: Deactivated successfully. Jan 30 14:32:58.211539 systemd[1]: Started sshd@88-49.13.124.2:22-139.178.68.195:57262.service - OpenSSH per-connection server daemon (139.178.68.195:57262). Jan 30 14:32:59.215154 sshd[4938]: Accepted publickey for core from 139.178.68.195 port 57262 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:32:59.219174 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:32:59.230421 systemd-logind[1459]: New session 47 of user core. Jan 30 14:32:59.239979 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 30 14:32:59.976543 sshd[4938]: pam_unix(sshd:session): session closed for user core Jan 30 14:32:59.981622 systemd[1]: sshd@88-49.13.124.2:22-139.178.68.195:57262.service: Deactivated successfully. Jan 30 14:32:59.985226 systemd[1]: session-47.scope: Deactivated successfully. Jan 30 14:32:59.987910 systemd-logind[1459]: Session 47 logged out. Waiting for processes to exit. Jan 30 14:32:59.989389 systemd-logind[1459]: Removed session 47. Jan 30 14:33:02.925180 systemd[1]: Started sshd@89-49.13.124.2:22-5.250.188.211:52056.service - OpenSSH per-connection server daemon (5.250.188.211:52056). Jan 30 14:33:03.258695 sshd[4951]: Invalid user es from 5.250.188.211 port 52056 Jan 30 14:33:03.309801 sshd[4951]: Received disconnect from 5.250.188.211 port 52056:11: Bye Bye [preauth] Jan 30 14:33:03.309801 sshd[4951]: Disconnected from invalid user es 5.250.188.211 port 52056 [preauth] Jan 30 14:33:03.316050 systemd[1]: sshd@89-49.13.124.2:22-5.250.188.211:52056.service: Deactivated successfully. Jan 30 14:33:05.154547 systemd[1]: Started sshd@90-49.13.124.2:22-139.178.68.195:55474.service - OpenSSH per-connection server daemon (139.178.68.195:55474). Jan 30 14:33:06.151448 sshd[4956]: Accepted publickey for core from 139.178.68.195 port 55474 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:06.153711 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:06.159055 systemd-logind[1459]: New session 48 of user core. Jan 30 14:33:06.165375 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 30 14:33:06.923496 sshd[4956]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:06.929790 systemd[1]: sshd@90-49.13.124.2:22-139.178.68.195:55474.service: Deactivated successfully. Jan 30 14:33:06.932904 systemd[1]: session-48.scope: Deactivated successfully. Jan 30 14:33:06.935468 systemd-logind[1459]: Session 48 logged out. Waiting for processes to exit. Jan 30 14:33:06.937550 systemd-logind[1459]: Removed session 48. Jan 30 14:33:09.059491 systemd[1]: Started sshd@91-49.13.124.2:22-140.206.168.98:41172.service - OpenSSH per-connection server daemon (140.206.168.98:41172). Jan 30 14:33:09.252509 systemd[1]: Started sshd@92-49.13.124.2:22-83.212.75.149:56810.service - OpenSSH per-connection server daemon (83.212.75.149:56810). 
Jan 30 14:33:09.616874 sshd[4973]: Invalid user ftpuser from 83.212.75.149 port 56810 Jan 30 14:33:09.672056 sshd[4973]: Received disconnect from 83.212.75.149 port 56810:11: Bye Bye [preauth] Jan 30 14:33:09.672056 sshd[4973]: Disconnected from invalid user ftpuser 83.212.75.149 port 56810 [preauth] Jan 30 14:33:09.673652 systemd[1]: sshd@92-49.13.124.2:22-83.212.75.149:56810.service: Deactivated successfully. Jan 30 14:33:12.099701 systemd[1]: Started sshd@93-49.13.124.2:22-139.178.68.195:55484.service - OpenSSH per-connection server daemon (139.178.68.195:55484). Jan 30 14:33:13.076588 sshd[4978]: Accepted publickey for core from 139.178.68.195 port 55484 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:13.078762 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:13.083929 systemd-logind[1459]: New session 49 of user core. Jan 30 14:33:13.089339 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 30 14:33:13.833350 sshd[4978]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:13.839362 systemd[1]: sshd@93-49.13.124.2:22-139.178.68.195:55484.service: Deactivated successfully. Jan 30 14:33:13.842421 systemd[1]: session-49.scope: Deactivated successfully. Jan 30 14:33:13.844184 systemd-logind[1459]: Session 49 logged out. Waiting for processes to exit. Jan 30 14:33:13.845551 systemd-logind[1459]: Removed session 49. Jan 30 14:33:15.484822 sshd[4971]: Connection closed by 140.206.168.98 port 41172 [preauth] Jan 30 14:33:15.486632 systemd[1]: sshd@91-49.13.124.2:22-140.206.168.98:41172.service: Deactivated successfully. Jan 30 14:33:19.015696 systemd[1]: Started sshd@94-49.13.124.2:22-139.178.68.195:40380.service - OpenSSH per-connection server daemon (139.178.68.195:40380). Jan 30 14:33:19.995704 sshd[4994]: Accepted publickey for core from 139.178.68.195 port 40380 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:19.997463 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:20.003310 systemd-logind[1459]: New session 50 of user core. Jan 30 14:33:20.006608 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 30 14:33:20.749509 sshd[4994]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:20.755726 systemd[1]: sshd@94-49.13.124.2:22-139.178.68.195:40380.service: Deactivated successfully. Jan 30 14:33:20.758841 systemd[1]: session-50.scope: Deactivated successfully. Jan 30 14:33:20.760498 systemd-logind[1459]: Session 50 logged out. Waiting for processes to exit. Jan 30 14:33:20.762228 systemd-logind[1459]: Removed session 50. Jan 30 14:33:25.922358 systemd[1]: Started sshd@95-49.13.124.2:22-139.178.68.195:33020.service - OpenSSH per-connection server daemon (139.178.68.195:33020). Jan 30 14:33:26.921685 sshd[5008]: Accepted publickey for core from 139.178.68.195 port 33020 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:26.922692 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:26.927580 systemd-logind[1459]: New session 51 of user core. Jan 30 14:33:26.933327 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 30 14:33:27.684515 sshd[5008]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:27.690739 systemd-logind[1459]: Session 51 logged out. Waiting for processes to exit. 
Jan 30 14:33:27.691733 systemd[1]: sshd@95-49.13.124.2:22-139.178.68.195:33020.service: Deactivated successfully. Jan 30 14:33:27.694191 systemd[1]: session-51.scope: Deactivated successfully. Jan 30 14:33:27.695698 systemd-logind[1459]: Removed session 51. Jan 30 14:33:30.261552 systemd[1]: sshd@66-49.13.124.2:22-140.206.168.98:44010.service: Deactivated successfully. Jan 30 14:33:32.860657 systemd[1]: Started sshd@96-49.13.124.2:22-139.178.68.195:33034.service - OpenSSH per-connection server daemon (139.178.68.195:33034). Jan 30 14:33:33.827694 sshd[5023]: Accepted publickey for core from 139.178.68.195 port 33034 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:33.830247 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:33.836270 systemd-logind[1459]: New session 52 of user core. Jan 30 14:33:33.842377 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 30 14:33:34.584329 sshd[5023]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:34.590709 systemd[1]: sshd@96-49.13.124.2:22-139.178.68.195:33034.service: Deactivated successfully. Jan 30 14:33:34.595224 systemd[1]: session-52.scope: Deactivated successfully. Jan 30 14:33:34.597976 systemd-logind[1459]: Session 52 logged out. Waiting for processes to exit. Jan 30 14:33:34.601407 systemd-logind[1459]: Removed session 52. Jan 30 14:33:39.772595 systemd[1]: Started sshd@97-49.13.124.2:22-139.178.68.195:40424.service - OpenSSH per-connection server daemon (139.178.68.195:40424). Jan 30 14:33:40.760623 sshd[5038]: Accepted publickey for core from 139.178.68.195 port 40424 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:40.763469 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:40.770703 systemd-logind[1459]: New session 53 of user core. Jan 30 14:33:40.777406 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 30 14:33:41.526961 sshd[5038]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:41.533138 systemd[1]: sshd@97-49.13.124.2:22-139.178.68.195:40424.service: Deactivated successfully. Jan 30 14:33:41.536303 systemd[1]: session-53.scope: Deactivated successfully. Jan 30 14:33:41.540343 systemd-logind[1459]: Session 53 logged out. Waiting for processes to exit. Jan 30 14:33:41.542554 systemd-logind[1459]: Removed session 53. Jan 30 14:33:46.703640 systemd[1]: Started sshd@98-49.13.124.2:22-139.178.68.195:49584.service - OpenSSH per-connection server daemon (139.178.68.195:49584). Jan 30 14:33:47.689456 sshd[5050]: Accepted publickey for core from 139.178.68.195 port 49584 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:47.691495 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:47.697738 systemd-logind[1459]: New session 54 of user core. Jan 30 14:33:47.706466 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 30 14:33:48.448819 sshd[5050]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:48.454018 systemd[1]: sshd@98-49.13.124.2:22-139.178.68.195:49584.service: Deactivated successfully. Jan 30 14:33:48.456317 systemd[1]: session-54.scope: Deactivated successfully. Jan 30 14:33:48.458801 systemd-logind[1459]: Session 54 logged out. Waiting for processes to exit. Jan 30 14:33:48.460339 systemd-logind[1459]: Removed session 54. 
Jan 30 14:33:49.767158 systemd[1]: Started sshd@99-49.13.124.2:22-80.251.219.209:57598.service - OpenSSH per-connection server daemon (80.251.219.209:57598). Jan 30 14:33:50.954032 sshd[5065]: Received disconnect from 80.251.219.209 port 57598:11: Bye Bye [preauth] Jan 30 14:33:50.954032 sshd[5065]: Disconnected from authenticating user root 80.251.219.209 port 57598 [preauth] Jan 30 14:33:50.958042 systemd[1]: sshd@99-49.13.124.2:22-80.251.219.209:57598.service: Deactivated successfully. Jan 30 14:33:53.626478 systemd[1]: Started sshd@100-49.13.124.2:22-139.178.68.195:49600.service - OpenSSH per-connection server daemon (139.178.68.195:49600). Jan 30 14:33:54.607657 sshd[5070]: Accepted publickey for core from 139.178.68.195 port 49600 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:33:54.608542 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:33:54.618141 systemd-logind[1459]: New session 55 of user core. Jan 30 14:33:54.629698 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 30 14:33:55.355440 sshd[5070]: pam_unix(sshd:session): session closed for user core Jan 30 14:33:55.361792 systemd[1]: sshd@100-49.13.124.2:22-139.178.68.195:49600.service: Deactivated successfully. Jan 30 14:33:55.365982 systemd[1]: session-55.scope: Deactivated successfully. Jan 30 14:33:55.367671 systemd-logind[1459]: Session 55 logged out. Waiting for processes to exit. Jan 30 14:33:55.369151 systemd-logind[1459]: Removed session 55. Jan 30 14:33:55.745538 systemd[1]: Started sshd@101-49.13.124.2:22-140.206.168.98:57784.service - OpenSSH per-connection server daemon (140.206.168.98:57784). Jan 30 14:34:00.531795 systemd[1]: Started sshd@102-49.13.124.2:22-139.178.68.195:42986.service - OpenSSH per-connection server daemon (139.178.68.195:42986). Jan 30 14:34:00.550549 systemd[1]: Started sshd@103-49.13.124.2:22-45.207.58.154:60108.service - OpenSSH per-connection server daemon (45.207.58.154:60108). Jan 30 14:34:01.508465 sshd[5083]: Connection closed by 140.206.168.98 port 57784 [preauth] Jan 30 14:34:01.510559 systemd[1]: sshd@101-49.13.124.2:22-140.206.168.98:57784.service: Deactivated successfully. Jan 30 14:34:01.514995 sshd[5085]: Accepted publickey for core from 139.178.68.195 port 42986 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:01.517312 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:01.524196 systemd-logind[1459]: New session 56 of user core. Jan 30 14:34:01.530373 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 30 14:34:02.269425 sshd[5085]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:02.274426 systemd[1]: sshd@102-49.13.124.2:22-139.178.68.195:42986.service: Deactivated successfully. Jan 30 14:34:02.278144 systemd[1]: session-56.scope: Deactivated successfully. Jan 30 14:34:02.279332 systemd-logind[1459]: Session 56 logged out. Waiting for processes to exit. Jan 30 14:34:02.280942 systemd-logind[1459]: Removed session 56. Jan 30 14:34:02.957648 sshd[5087]: Invalid user git from 45.207.58.154 port 60108 Jan 30 14:34:03.225766 sshd[5087]: Received disconnect from 45.207.58.154 port 60108:11: Bye Bye [preauth] Jan 30 14:34:03.225766 sshd[5087]: Disconnected from invalid user git 45.207.58.154 port 60108 [preauth] Jan 30 14:34:03.228418 systemd[1]: sshd@103-49.13.124.2:22-45.207.58.154:60108.service: Deactivated successfully. 
Jan 30 14:34:07.450532 systemd[1]: Started sshd@104-49.13.124.2:22-139.178.68.195:36466.service - OpenSSH per-connection server daemon (139.178.68.195:36466). Jan 30 14:34:08.439951 sshd[5107]: Accepted publickey for core from 139.178.68.195 port 36466 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:08.442251 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:08.447968 systemd-logind[1459]: New session 57 of user core. Jan 30 14:34:08.457472 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 30 14:34:09.210001 sshd[5107]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:09.215185 systemd-logind[1459]: Session 57 logged out. Waiting for processes to exit. Jan 30 14:34:09.215729 systemd[1]: sshd@104-49.13.124.2:22-139.178.68.195:36466.service: Deactivated successfully. Jan 30 14:34:09.220679 systemd[1]: session-57.scope: Deactivated successfully. Jan 30 14:34:09.223434 systemd-logind[1459]: Removed session 57. Jan 30 14:34:14.391140 systemd[1]: Started sshd@105-49.13.124.2:22-139.178.68.195:36474.service - OpenSSH per-connection server daemon (139.178.68.195:36474). Jan 30 14:34:15.378144 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 36474 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:15.379656 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:15.386990 systemd-logind[1459]: New session 58 of user core. Jan 30 14:34:15.391712 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 30 14:34:16.137277 sshd[5120]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:16.143219 systemd[1]: sshd@105-49.13.124.2:22-139.178.68.195:36474.service: Deactivated successfully. Jan 30 14:34:16.147709 systemd[1]: session-58.scope: Deactivated successfully. Jan 30 14:34:16.149295 systemd-logind[1459]: Session 58 logged out. Waiting for processes to exit. Jan 30 14:34:16.150690 systemd-logind[1459]: Removed session 58. Jan 30 14:34:21.105002 systemd[1]: Started sshd@106-49.13.124.2:22-83.212.75.149:52808.service - OpenSSH per-connection server daemon (83.212.75.149:52808). Jan 30 14:34:21.314899 systemd[1]: Started sshd@107-49.13.124.2:22-139.178.68.195:48778.service - OpenSSH per-connection server daemon (139.178.68.195:48778). Jan 30 14:34:21.470853 sshd[5132]: Invalid user admin from 83.212.75.149 port 52808 Jan 30 14:34:21.526652 systemd[1]: sshd@79-49.13.124.2:22-140.206.168.98:51510.service: Deactivated successfully. Jan 30 14:34:21.531178 sshd[5132]: Received disconnect from 83.212.75.149 port 52808:11: Bye Bye [preauth] Jan 30 14:34:21.531178 sshd[5132]: Disconnected from invalid user admin 83.212.75.149 port 52808 [preauth] Jan 30 14:34:21.532371 systemd[1]: sshd@106-49.13.124.2:22-83.212.75.149:52808.service: Deactivated successfully. Jan 30 14:34:22.300999 sshd[5135]: Accepted publickey for core from 139.178.68.195 port 48778 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:22.303006 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:22.310023 systemd-logind[1459]: New session 59 of user core. Jan 30 14:34:22.314345 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 30 14:34:22.347416 systemd[1]: Started sshd@108-49.13.124.2:22-183.88.232.183:36866.service - OpenSSH per-connection server daemon (183.88.232.183:36866). 
Jan 30 14:34:23.060888 sshd[5135]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:23.065231 systemd[1]: sshd@107-49.13.124.2:22-139.178.68.195:48778.service: Deactivated successfully. Jan 30 14:34:23.068427 systemd[1]: session-59.scope: Deactivated successfully. Jan 30 14:34:23.070432 systemd-logind[1459]: Session 59 logged out. Waiting for processes to exit. Jan 30 14:34:23.071494 systemd-logind[1459]: Removed session 59. Jan 30 14:34:23.394500 sshd[5143]: Invalid user be from 183.88.232.183 port 36866 Jan 30 14:34:23.587826 sshd[5143]: Received disconnect from 183.88.232.183 port 36866:11: Bye Bye [preauth] Jan 30 14:34:23.587826 sshd[5143]: Disconnected from invalid user be 183.88.232.183 port 36866 [preauth] Jan 30 14:34:23.590654 systemd[1]: sshd@108-49.13.124.2:22-183.88.232.183:36866.service: Deactivated successfully. Jan 30 14:34:27.575490 systemd[1]: Started sshd@109-49.13.124.2:22-5.250.188.211:39294.service - OpenSSH per-connection server daemon (5.250.188.211:39294). Jan 30 14:34:27.903213 sshd[5158]: Invalid user dev from 5.250.188.211 port 39294 Jan 30 14:34:27.954246 sshd[5158]: Received disconnect from 5.250.188.211 port 39294:11: Bye Bye [preauth] Jan 30 14:34:27.954246 sshd[5158]: Disconnected from invalid user dev 5.250.188.211 port 39294 [preauth] Jan 30 14:34:27.956691 systemd[1]: sshd@109-49.13.124.2:22-5.250.188.211:39294.service: Deactivated successfully. Jan 30 14:34:28.233483 systemd[1]: Started sshd@110-49.13.124.2:22-139.178.68.195:57964.service - OpenSSH per-connection server daemon (139.178.68.195:57964). Jan 30 14:34:29.211822 sshd[5163]: Accepted publickey for core from 139.178.68.195 port 57964 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:29.214306 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:29.219280 systemd-logind[1459]: New session 60 of user core. Jan 30 14:34:29.225318 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 30 14:34:29.955768 sshd[5163]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:29.962964 systemd[1]: sshd@110-49.13.124.2:22-139.178.68.195:57964.service: Deactivated successfully. Jan 30 14:34:29.967249 systemd[1]: session-60.scope: Deactivated successfully. Jan 30 14:34:29.968654 systemd-logind[1459]: Session 60 logged out. Waiting for processes to exit. Jan 30 14:34:29.969723 systemd-logind[1459]: Removed session 60. Jan 30 14:34:35.131566 systemd[1]: Started sshd@111-49.13.124.2:22-139.178.68.195:46564.service - OpenSSH per-connection server daemon (139.178.68.195:46564). Jan 30 14:34:36.112507 sshd[5176]: Accepted publickey for core from 139.178.68.195 port 46564 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:36.115078 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:36.123829 systemd-logind[1459]: New session 61 of user core. Jan 30 14:34:36.129283 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 30 14:34:36.865881 sshd[5176]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:36.871774 systemd[1]: sshd@111-49.13.124.2:22-139.178.68.195:46564.service: Deactivated successfully. Jan 30 14:34:36.874137 systemd[1]: session-61.scope: Deactivated successfully. Jan 30 14:34:36.876629 systemd-logind[1459]: Session 61 logged out. Waiting for processes to exit. Jan 30 14:34:36.878685 systemd-logind[1459]: Removed session 61. 
Jan 30 14:34:42.049569 systemd[1]: Started sshd@112-49.13.124.2:22-139.178.68.195:46580.service - OpenSSH per-connection server daemon (139.178.68.195:46580). Jan 30 14:34:42.884186 systemd[1]: Started sshd@113-49.13.124.2:22-140.206.168.98:46954.service - OpenSSH per-connection server daemon (140.206.168.98:46954). Jan 30 14:34:43.039755 sshd[5192]: Accepted publickey for core from 139.178.68.195 port 46580 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:43.042318 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:43.051645 systemd-logind[1459]: New session 62 of user core. Jan 30 14:34:43.057678 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 30 14:34:43.799238 sshd[5192]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:43.803804 systemd[1]: sshd@112-49.13.124.2:22-139.178.68.195:46580.service: Deactivated successfully. Jan 30 14:34:43.806646 systemd[1]: session-62.scope: Deactivated successfully. Jan 30 14:34:43.809427 systemd-logind[1459]: Session 62 logged out. Waiting for processes to exit. Jan 30 14:34:43.810693 systemd-logind[1459]: Removed session 62. Jan 30 14:34:48.869514 sshd[5195]: Connection closed by 140.206.168.98 port 46954 [preauth] Jan 30 14:34:48.871143 systemd[1]: sshd@113-49.13.124.2:22-140.206.168.98:46954.service: Deactivated successfully. Jan 30 14:34:48.971603 systemd[1]: Started sshd@114-49.13.124.2:22-139.178.68.195:40082.service - OpenSSH per-connection server daemon (139.178.68.195:40082). Jan 30 14:34:49.960318 sshd[5211]: Accepted publickey for core from 139.178.68.195 port 40082 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:49.962395 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:49.968693 systemd-logind[1459]: New session 63 of user core. Jan 30 14:34:49.971057 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 30 14:34:50.717391 sshd[5211]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:50.722565 systemd-logind[1459]: Session 63 logged out. Waiting for processes to exit. Jan 30 14:34:50.722697 systemd[1]: sshd@114-49.13.124.2:22-139.178.68.195:40082.service: Deactivated successfully. Jan 30 14:34:50.725818 systemd[1]: session-63.scope: Deactivated successfully. Jan 30 14:34:50.728914 systemd-logind[1459]: Removed session 63. Jan 30 14:34:55.896490 systemd[1]: Started sshd@115-49.13.124.2:22-139.178.68.195:41722.service - OpenSSH per-connection server daemon (139.178.68.195:41722). Jan 30 14:34:56.877387 sshd[5226]: Accepted publickey for core from 139.178.68.195 port 41722 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:34:56.879687 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:34:56.887137 systemd-logind[1459]: New session 64 of user core. Jan 30 14:34:56.892348 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 30 14:34:57.641180 sshd[5226]: pam_unix(sshd:session): session closed for user core Jan 30 14:34:57.645523 systemd[1]: sshd@115-49.13.124.2:22-139.178.68.195:41722.service: Deactivated successfully. Jan 30 14:34:57.651214 systemd[1]: session-64.scope: Deactivated successfully. Jan 30 14:34:57.652988 systemd-logind[1459]: Session 64 logged out. Waiting for processes to exit. Jan 30 14:34:57.654560 systemd-logind[1459]: Removed session 64. 
Jan 30 14:35:02.814717 systemd[1]: Started sshd@116-49.13.124.2:22-139.178.68.195:41730.service - OpenSSH per-connection server daemon (139.178.68.195:41730). Jan 30 14:35:03.801575 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 41730 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:03.802443 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:03.812386 systemd-logind[1459]: New session 65 of user core. Jan 30 14:35:03.818438 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 30 14:35:04.267840 systemd[1]: Started sshd@117-49.13.124.2:22-80.251.219.209:55944.service - OpenSSH per-connection server daemon (80.251.219.209:55944). Jan 30 14:35:04.554682 sshd[5240]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:04.561184 systemd[1]: sshd@116-49.13.124.2:22-139.178.68.195:41730.service: Deactivated successfully. Jan 30 14:35:04.564418 systemd[1]: session-65.scope: Deactivated successfully. Jan 30 14:35:04.566558 systemd-logind[1459]: Session 65 logged out. Waiting for processes to exit. Jan 30 14:35:04.567877 systemd-logind[1459]: Removed session 65. Jan 30 14:35:05.256481 sshd[5245]: Invalid user jiaxuan from 80.251.219.209 port 55944 Jan 30 14:35:05.453840 sshd[5245]: Received disconnect from 80.251.219.209 port 55944:11: Bye Bye [preauth] Jan 30 14:35:05.453840 sshd[5245]: Disconnected from invalid user jiaxuan 80.251.219.209 port 55944 [preauth] Jan 30 14:35:05.455558 systemd[1]: sshd@117-49.13.124.2:22-80.251.219.209:55944.service: Deactivated successfully. Jan 30 14:35:09.725197 systemd[1]: Started sshd@118-49.13.124.2:22-139.178.68.195:48824.service - OpenSSH per-connection server daemon (139.178.68.195:48824). Jan 30 14:35:10.720860 sshd[5261]: Accepted publickey for core from 139.178.68.195 port 48824 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:10.723488 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:10.730177 systemd-logind[1459]: New session 66 of user core. Jan 30 14:35:10.734556 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 30 14:35:11.487931 sshd[5261]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:11.495880 systemd[1]: sshd@118-49.13.124.2:22-139.178.68.195:48824.service: Deactivated successfully. Jan 30 14:35:11.504227 systemd[1]: session-66.scope: Deactivated successfully. Jan 30 14:35:11.507403 systemd-logind[1459]: Session 66 logged out. Waiting for processes to exit. Jan 30 14:35:11.509889 systemd-logind[1459]: Removed session 66. Jan 30 14:35:16.672246 systemd[1]: Started sshd@119-49.13.124.2:22-139.178.68.195:60838.service - OpenSSH per-connection server daemon (139.178.68.195:60838). Jan 30 14:35:17.666502 sshd[5274]: Accepted publickey for core from 139.178.68.195 port 60838 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:17.670890 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:17.682805 systemd-logind[1459]: New session 67 of user core. Jan 30 14:35:17.692396 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 30 14:35:18.430402 sshd[5274]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:18.437700 systemd[1]: sshd@119-49.13.124.2:22-139.178.68.195:60838.service: Deactivated successfully. Jan 30 14:35:18.441390 systemd[1]: session-67.scope: Deactivated successfully. 
Jan 30 14:35:18.444153 systemd-logind[1459]: Session 67 logged out. Waiting for processes to exit. Jan 30 14:35:18.445295 systemd-logind[1459]: Removed session 67. Jan 30 14:35:23.611526 systemd[1]: Started sshd@120-49.13.124.2:22-139.178.68.195:60850.service - OpenSSH per-connection server daemon (139.178.68.195:60850). Jan 30 14:35:24.603145 sshd[5287]: Accepted publickey for core from 139.178.68.195 port 60850 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:24.603883 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:24.609123 systemd-logind[1459]: New session 68 of user core. Jan 30 14:35:24.617414 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 30 14:35:25.380028 sshd[5287]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:25.387271 systemd[1]: sshd@120-49.13.124.2:22-139.178.68.195:60850.service: Deactivated successfully. Jan 30 14:35:25.391405 systemd[1]: session-68.scope: Deactivated successfully. Jan 30 14:35:25.392769 systemd-logind[1459]: Session 68 logged out. Waiting for processes to exit. Jan 30 14:35:25.394276 systemd-logind[1459]: Removed session 68. Jan 30 14:35:30.180987 systemd[1]: Started sshd@121-49.13.124.2:22-140.206.168.98:39288.service - OpenSSH per-connection server daemon (140.206.168.98:39288). Jan 30 14:35:30.553463 systemd[1]: Started sshd@122-49.13.124.2:22-139.178.68.195:51018.service - OpenSSH per-connection server daemon (139.178.68.195:51018). Jan 30 14:35:31.544395 sshd[5302]: Accepted publickey for core from 139.178.68.195 port 51018 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:31.547401 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:31.553763 systemd-logind[1459]: New session 69 of user core. Jan 30 14:35:31.559718 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 30 14:35:31.600539 systemd[1]: Started sshd@123-49.13.124.2:22-83.212.75.149:54362.service - OpenSSH per-connection server daemon (83.212.75.149:54362). Jan 30 14:35:31.960385 sshd[5306]: Invalid user server from 83.212.75.149 port 54362 Jan 30 14:35:32.015452 sshd[5306]: Received disconnect from 83.212.75.149 port 54362:11: Bye Bye [preauth] Jan 30 14:35:32.015452 sshd[5306]: Disconnected from invalid user server 83.212.75.149 port 54362 [preauth] Jan 30 14:35:32.019070 systemd[1]: sshd@123-49.13.124.2:22-83.212.75.149:54362.service: Deactivated successfully. Jan 30 14:35:32.301376 sshd[5302]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:32.306406 systemd[1]: sshd@122-49.13.124.2:22-139.178.68.195:51018.service: Deactivated successfully. Jan 30 14:35:32.310017 systemd[1]: session-69.scope: Deactivated successfully. Jan 30 14:35:32.312773 systemd-logind[1459]: Session 69 logged out. Waiting for processes to exit. Jan 30 14:35:32.314347 systemd-logind[1459]: Removed session 69. Jan 30 14:35:36.161860 sshd[5300]: Connection closed by 140.206.168.98 port 39288 [preauth] Jan 30 14:35:36.163056 systemd[1]: sshd@121-49.13.124.2:22-140.206.168.98:39288.service: Deactivated successfully. Jan 30 14:35:37.472282 systemd[1]: Started sshd@124-49.13.124.2:22-139.178.68.195:43880.service - OpenSSH per-connection server daemon (139.178.68.195:43880). 
Jan 30 14:35:38.465692 sshd[5324]: Accepted publickey for core from 139.178.68.195 port 43880 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:38.467136 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:38.473432 systemd-logind[1459]: New session 70 of user core. Jan 30 14:35:38.478348 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 30 14:35:39.225457 sshd[5324]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:39.234702 systemd[1]: sshd@124-49.13.124.2:22-139.178.68.195:43880.service: Deactivated successfully. Jan 30 14:35:39.239907 systemd[1]: session-70.scope: Deactivated successfully. Jan 30 14:35:39.243892 systemd-logind[1459]: Session 70 logged out. Waiting for processes to exit. Jan 30 14:35:39.246954 systemd-logind[1459]: Removed session 70. Jan 30 14:35:44.395720 systemd[1]: Started sshd@125-49.13.124.2:22-139.178.68.195:43882.service - OpenSSH per-connection server daemon (139.178.68.195:43882). Jan 30 14:35:45.367771 sshd[5336]: Accepted publickey for core from 139.178.68.195 port 43882 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:45.370277 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:45.376243 systemd-logind[1459]: New session 71 of user core. Jan 30 14:35:45.383346 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 30 14:35:46.119828 sshd[5336]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:46.125667 systemd[1]: sshd@125-49.13.124.2:22-139.178.68.195:43882.service: Deactivated successfully. Jan 30 14:35:46.128943 systemd[1]: session-71.scope: Deactivated successfully. Jan 30 14:35:46.134185 systemd-logind[1459]: Session 71 logged out. Waiting for processes to exit. Jan 30 14:35:46.136594 systemd-logind[1459]: Removed session 71. Jan 30 14:35:48.114359 systemd[1]: Started sshd@126-49.13.124.2:22-183.88.232.183:36104.service - OpenSSH per-connection server daemon (183.88.232.183:36104). Jan 30 14:35:49.348002 sshd[5348]: Received disconnect from 183.88.232.183 port 36104:11: Bye Bye [preauth] Jan 30 14:35:49.348002 sshd[5348]: Disconnected from authenticating user root 183.88.232.183 port 36104 [preauth] Jan 30 14:35:49.348528 systemd[1]: sshd@126-49.13.124.2:22-183.88.232.183:36104.service: Deactivated successfully. Jan 30 14:35:51.301557 systemd[1]: Started sshd@127-49.13.124.2:22-139.178.68.195:37406.service - OpenSSH per-connection server daemon (139.178.68.195:37406). Jan 30 14:35:52.291991 sshd[5355]: Accepted publickey for core from 139.178.68.195 port 37406 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:52.294785 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:52.303950 systemd-logind[1459]: New session 72 of user core. Jan 30 14:35:52.306310 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 30 14:35:53.050701 sshd[5355]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:53.054723 systemd[1]: sshd@127-49.13.124.2:22-139.178.68.195:37406.service: Deactivated successfully. Jan 30 14:35:53.058955 systemd[1]: session-72.scope: Deactivated successfully. Jan 30 14:35:53.067202 systemd-logind[1459]: Session 72 logged out. Waiting for processes to exit. Jan 30 14:35:53.070035 systemd-logind[1459]: Removed session 72. 
Jan 30 14:35:53.344577 systemd[1]: Started sshd@128-49.13.124.2:22-5.250.188.211:54978.service - OpenSSH per-connection server daemon (5.250.188.211:54978). Jan 30 14:35:53.670277 sshd[5368]: Invalid user admin from 5.250.188.211 port 54978 Jan 30 14:35:53.722928 sshd[5368]: Received disconnect from 5.250.188.211 port 54978:11: Bye Bye [preauth] Jan 30 14:35:53.722928 sshd[5368]: Disconnected from invalid user admin 5.250.188.211 port 54978 [preauth] Jan 30 14:35:53.725010 systemd[1]: sshd@128-49.13.124.2:22-5.250.188.211:54978.service: Deactivated successfully. Jan 30 14:35:58.179729 systemd[1]: Started sshd@129-49.13.124.2:22-45.207.58.154:47259.service - OpenSSH per-connection server daemon (45.207.58.154:47259). Jan 30 14:35:58.227505 systemd[1]: Started sshd@130-49.13.124.2:22-139.178.68.195:59078.service - OpenSSH per-connection server daemon (139.178.68.195:59078). Jan 30 14:35:59.212013 sshd[5376]: Accepted publickey for core from 139.178.68.195 port 59078 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:35:59.214187 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:35:59.220601 systemd-logind[1459]: New session 73 of user core. Jan 30 14:35:59.224495 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 30 14:35:59.988971 sshd[5376]: pam_unix(sshd:session): session closed for user core Jan 30 14:35:59.994433 systemd[1]: sshd@130-49.13.124.2:22-139.178.68.195:59078.service: Deactivated successfully. Jan 30 14:35:59.998559 systemd[1]: session-73.scope: Deactivated successfully. Jan 30 14:35:59.999788 systemd-logind[1459]: Session 73 logged out. Waiting for processes to exit. Jan 30 14:36:00.001001 systemd-logind[1459]: Removed session 73. Jan 30 14:36:00.722798 sshd[5373]: Invalid user user from 45.207.58.154 port 47259 Jan 30 14:36:01.018651 sshd[5373]: Received disconnect from 45.207.58.154 port 47259:11: Bye Bye [preauth] Jan 30 14:36:01.018651 sshd[5373]: Disconnected from invalid user user 45.207.58.154 port 47259 [preauth] Jan 30 14:36:01.019327 systemd[1]: sshd@129-49.13.124.2:22-45.207.58.154:47259.service: Deactivated successfully. Jan 30 14:36:05.173675 systemd[1]: Started sshd@131-49.13.124.2:22-139.178.68.195:38026.service - OpenSSH per-connection server daemon (139.178.68.195:38026). Jan 30 14:36:06.004569 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 30 14:36:06.028542 systemd-tmpfiles[5395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:36:06.029887 systemd-tmpfiles[5395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:36:06.030537 systemd-tmpfiles[5395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:36:06.031783 systemd-tmpfiles[5395]: ACLs are not supported, ignoring. Jan 30 14:36:06.032273 systemd-tmpfiles[5395]: ACLs are not supported, ignoring. Jan 30 14:36:06.036061 systemd-tmpfiles[5395]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:36:06.036103 systemd-tmpfiles[5395]: Skipping /boot Jan 30 14:36:06.044356 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 30 14:36:06.044666 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. 
Jan 30 14:36:06.164502 sshd[5391]: Accepted publickey for core from 139.178.68.195 port 38026 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:06.166679 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:06.171868 systemd-logind[1459]: New session 74 of user core. Jan 30 14:36:06.175296 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 30 14:36:06.939757 sshd[5391]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:06.945068 systemd[1]: sshd@131-49.13.124.2:22-139.178.68.195:38026.service: Deactivated successfully. Jan 30 14:36:06.948691 systemd[1]: session-74.scope: Deactivated successfully. Jan 30 14:36:06.949727 systemd-logind[1459]: Session 74 logged out. Waiting for processes to exit. Jan 30 14:36:06.951238 systemd-logind[1459]: Removed session 74. Jan 30 14:36:12.121559 systemd[1]: Started sshd@132-49.13.124.2:22-139.178.68.195:38036.service - OpenSSH per-connection server daemon (139.178.68.195:38036). Jan 30 14:36:13.093578 sshd[5409]: Accepted publickey for core from 139.178.68.195 port 38036 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:13.095698 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:13.102024 systemd-logind[1459]: New session 75 of user core. Jan 30 14:36:13.108488 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 30 14:36:13.842719 sshd[5409]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:13.848821 systemd[1]: sshd@132-49.13.124.2:22-139.178.68.195:38036.service: Deactivated successfully. Jan 30 14:36:13.852525 systemd[1]: session-75.scope: Deactivated successfully. Jan 30 14:36:13.854756 systemd-logind[1459]: Session 75 logged out. Waiting for processes to exit. Jan 30 14:36:13.856430 systemd-logind[1459]: Removed session 75. Jan 30 14:36:18.143315 systemd[1]: Started sshd@133-49.13.124.2:22-140.206.168.98:34198.service - OpenSSH per-connection server daemon (140.206.168.98:34198). Jan 30 14:36:19.022455 systemd[1]: Started sshd@134-49.13.124.2:22-139.178.68.195:42560.service - OpenSSH per-connection server daemon (139.178.68.195:42560). Jan 30 14:36:20.011657 sshd[5426]: Accepted publickey for core from 139.178.68.195 port 42560 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:20.012954 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:20.018786 systemd-logind[1459]: New session 76 of user core. Jan 30 14:36:20.034826 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 30 14:36:20.762374 sshd[5426]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:20.766376 systemd[1]: sshd@134-49.13.124.2:22-139.178.68.195:42560.service: Deactivated successfully. Jan 30 14:36:20.768840 systemd[1]: session-76.scope: Deactivated successfully. Jan 30 14:36:20.771063 systemd-logind[1459]: Session 76 logged out. Waiting for processes to exit. Jan 30 14:36:20.772892 systemd-logind[1459]: Removed session 76. Jan 30 14:36:25.940487 systemd[1]: Started sshd@135-49.13.124.2:22-139.178.68.195:60762.service - OpenSSH per-connection server daemon (139.178.68.195:60762). 
Jan 30 14:36:26.915397 sshd[5439]: Accepted publickey for core from 139.178.68.195 port 60762 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:26.920777 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:26.931725 systemd-logind[1459]: New session 77 of user core. Jan 30 14:36:26.953425 systemd[1]: Started session-77.scope - Session 77 of User core. Jan 30 14:36:27.684841 sshd[5439]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:27.695335 systemd[1]: sshd@135-49.13.124.2:22-139.178.68.195:60762.service: Deactivated successfully. Jan 30 14:36:27.699614 systemd[1]: session-77.scope: Deactivated successfully. Jan 30 14:36:27.706532 systemd-logind[1459]: Session 77 logged out. Waiting for processes to exit. Jan 30 14:36:27.708610 systemd-logind[1459]: Removed session 77. Jan 30 14:36:32.861145 systemd[1]: Started sshd@136-49.13.124.2:22-139.178.68.195:60776.service - OpenSSH per-connection server daemon (139.178.68.195:60776). Jan 30 14:36:33.842163 sshd[5451]: Accepted publickey for core from 139.178.68.195 port 60776 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:33.845260 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:33.851045 systemd-logind[1459]: New session 78 of user core. Jan 30 14:36:33.858378 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 30 14:36:34.617464 sshd[5451]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:34.623306 systemd[1]: sshd@136-49.13.124.2:22-139.178.68.195:60776.service: Deactivated successfully. Jan 30 14:36:34.626316 systemd[1]: session-78.scope: Deactivated successfully. Jan 30 14:36:34.628720 systemd-logind[1459]: Session 78 logged out. Waiting for processes to exit. Jan 30 14:36:34.631597 systemd-logind[1459]: Removed session 78. Jan 30 14:36:35.070510 systemd[1]: Started sshd@137-49.13.124.2:22-80.251.219.209:54300.service - OpenSSH per-connection server daemon (80.251.219.209:54300). Jan 30 14:36:36.168970 sshd[5464]: Invalid user yudi from 80.251.219.209 port 54300 Jan 30 14:36:36.357463 sshd[5464]: Received disconnect from 80.251.219.209 port 54300:11: Bye Bye [preauth] Jan 30 14:36:36.357463 sshd[5464]: Disconnected from invalid user yudi 80.251.219.209 port 54300 [preauth] Jan 30 14:36:36.359372 systemd[1]: sshd@137-49.13.124.2:22-80.251.219.209:54300.service: Deactivated successfully. Jan 30 14:36:39.805609 systemd[1]: Started sshd@138-49.13.124.2:22-139.178.68.195:41676.service - OpenSSH per-connection server daemon (139.178.68.195:41676). Jan 30 14:36:40.796730 sshd[5471]: Accepted publickey for core from 139.178.68.195 port 41676 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:40.799244 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:40.808496 systemd-logind[1459]: New session 79 of user core. Jan 30 14:36:40.814211 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 30 14:36:41.559896 sshd[5471]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:41.564816 systemd[1]: sshd@138-49.13.124.2:22-139.178.68.195:41676.service: Deactivated successfully. Jan 30 14:36:41.564891 systemd-logind[1459]: Session 79 logged out. Waiting for processes to exit. Jan 30 14:36:41.568381 systemd[1]: session-79.scope: Deactivated successfully. Jan 30 14:36:41.570654 systemd-logind[1459]: Removed session 79. 
Jan 30 14:36:46.381582 systemd[1]: Started sshd@139-49.13.124.2:22-83.212.75.149:56666.service - OpenSSH per-connection server daemon (83.212.75.149:56666). Jan 30 14:36:46.739462 systemd[1]: Started sshd@140-49.13.124.2:22-139.178.68.195:50498.service - OpenSSH per-connection server daemon (139.178.68.195:50498). Jan 30 14:36:46.757031 sshd[5484]: Invalid user server from 83.212.75.149 port 56666 Jan 30 14:36:46.817907 sshd[5484]: Received disconnect from 83.212.75.149 port 56666:11: Bye Bye [preauth] Jan 30 14:36:46.817907 sshd[5484]: Disconnected from invalid user server 83.212.75.149 port 56666 [preauth] Jan 30 14:36:46.821127 systemd[1]: sshd@139-49.13.124.2:22-83.212.75.149:56666.service: Deactivated successfully. Jan 30 14:36:47.733475 sshd[5487]: Accepted publickey for core from 139.178.68.195 port 50498 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:47.737209 sshd[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:47.745609 systemd-logind[1459]: New session 80 of user core. Jan 30 14:36:47.750598 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 30 14:36:48.485339 sshd[5487]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:48.492236 systemd[1]: sshd@140-49.13.124.2:22-139.178.68.195:50498.service: Deactivated successfully. Jan 30 14:36:48.495863 systemd[1]: session-80.scope: Deactivated successfully. Jan 30 14:36:48.499044 systemd-logind[1459]: Session 80 logged out. Waiting for processes to exit. Jan 30 14:36:48.502801 systemd-logind[1459]: Removed session 80. Jan 30 14:36:48.663495 systemd[1]: Started sshd@141-49.13.124.2:22-139.178.68.195:50500.service - OpenSSH per-connection server daemon (139.178.68.195:50500). Jan 30 14:36:49.652962 sshd[5502]: Accepted publickey for core from 139.178.68.195 port 50500 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:49.655055 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:49.664411 systemd-logind[1459]: New session 81 of user core. Jan 30 14:36:49.666723 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 30 14:36:50.478754 sshd[5502]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:50.482930 systemd-logind[1459]: Session 81 logged out. Waiting for processes to exit. Jan 30 14:36:50.483155 systemd[1]: sshd@141-49.13.124.2:22-139.178.68.195:50500.service: Deactivated successfully. Jan 30 14:36:50.485834 systemd[1]: session-81.scope: Deactivated successfully. Jan 30 14:36:50.489220 systemd-logind[1459]: Removed session 81. Jan 30 14:36:50.652663 systemd[1]: Started sshd@142-49.13.124.2:22-139.178.68.195:50506.service - OpenSSH per-connection server daemon (139.178.68.195:50506). Jan 30 14:36:51.628286 sshd[5515]: Accepted publickey for core from 139.178.68.195 port 50506 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:51.630544 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:51.637016 systemd-logind[1459]: New session 82 of user core. Jan 30 14:36:51.643373 systemd[1]: Started session-82.scope - Session 82 of User core. Jan 30 14:36:56.683806 sshd[5515]: pam_unix(sshd:session): session closed for user core Jan 30 14:36:56.690664 systemd[1]: sshd@142-49.13.124.2:22-139.178.68.195:50506.service: Deactivated successfully. Jan 30 14:36:56.693796 systemd[1]: session-82.scope: Deactivated successfully. 
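Interleaved with the legitimate publickey sessions for the core user, sshd records pre-auth disconnects from scanners probing usernames such as admin, user, yudi and server. A small sketch that tallies those attempts per source address, using the "Invalid user ... from ... port ..." message format visible above (pipe journal output into it, for example journalctl -t sshd):

```python
#!/usr/bin/env python3
"""Count sshd pre-auth failures per source IP.

Sketch only: parses lines in the format visible in the log above,
e.g. 'sshd[5368]: Invalid user admin from 5.250.188.211 port 54978'.
Usage:  journalctl -t sshd | python3 count_preauth.py
"""
import re
import sys
from collections import Counter

INVALID_USER = re.compile(
    r"sshd\[\d+\]: Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+"
)

def main(stream):
    by_ip = Counter()
    users = Counter()
    for line in stream:
        m = INVALID_USER.search(line)
        if m:
            by_ip[m.group("ip")] += 1
            users[m.group("user")] += 1
    for ip, count in by_ip.most_common():
        print(f"{ip}\t{count} invalid-user attempts")
    if users:
        print("most tried usernames:", ", ".join(u for u, _ in users.most_common(5)))

if __name__ == "__main__":
    main(sys.stdin)
```

Run against this excerpt it would report one attempt each from the four scanning addresses seen so far.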
Jan 30 14:36:56.694828 systemd-logind[1459]: Session 82 logged out. Waiting for processes to exit. Jan 30 14:36:56.695991 systemd-logind[1459]: Removed session 82. Jan 30 14:36:56.859186 systemd[1]: Started sshd@143-49.13.124.2:22-139.178.68.195:53360.service - OpenSSH per-connection server daemon (139.178.68.195:53360). Jan 30 14:36:57.844009 sshd[5528]: Accepted publickey for core from 139.178.68.195 port 53360 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:36:57.846012 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:36:57.853175 systemd-logind[1459]: New session 83 of user core. Jan 30 14:36:57.857414 systemd[1]: Started session-83.scope - Session 83 of User core. Jan 30 14:37:00.694797 containerd[1476]: time="2025-01-30T14:37:00.694591042Z" level=info msg="StopContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" with timeout 30 (s)" Jan 30 14:37:00.699787 containerd[1476]: time="2025-01-30T14:37:00.698500507Z" level=info msg="Stop container \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" with signal terminated" Jan 30 14:37:00.707916 systemd[1]: run-containerd-runc-k8s.io-83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5-runc.35d583.mount: Deactivated successfully. Jan 30 14:37:00.722749 containerd[1476]: time="2025-01-30T14:37:00.722663506Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:37:00.725998 systemd[1]: cri-containerd-321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318.scope: Deactivated successfully. Jan 30 14:37:00.727510 systemd[1]: cri-containerd-321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318.scope: Consumed 1.096s CPU time. Jan 30 14:37:00.736225 containerd[1476]: time="2025-01-30T14:37:00.736041953Z" level=info msg="StopContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" with timeout 2 (s)" Jan 30 14:37:00.736560 containerd[1476]: time="2025-01-30T14:37:00.736513356Z" level=info msg="Stop container \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" with signal terminated" Jan 30 14:37:00.750019 systemd-networkd[1376]: lxc_health: Link DOWN Jan 30 14:37:00.750028 systemd-networkd[1376]: lxc_health: Lost carrier Jan 30 14:37:00.765490 systemd[1]: cri-containerd-83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5.scope: Deactivated successfully. Jan 30 14:37:00.766192 systemd[1]: cri-containerd-83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5.scope: Consumed 9.558s CPU time. Jan 30 14:37:00.775664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318-rootfs.mount: Deactivated successfully. 
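The containerd entries above show the CRI StopContainer flow for the two Cilium containers: each is sent SIGTERM ("with signal terminated") with a per-container grace period (30 s and 2 s here), after which the matching cri-containerd scope is deactivated and its accumulated CPU time reported; removing /etc/cni/net.d/05-cilium.conf also triggers the CNI reload error, and the lxc_health link goes down. Roughly the same stop can be issued manually through crictl; a hedged sketch (the timeout flag spelling may differ between crictl releases):

```python
#!/usr/bin/env python3
"""Stop a CRI container with a grace period, mirroring the log above.

Assumption: crictl is installed and pointed at the node's CRI socket.
The --timeout flag is the grace period in seconds before a forced kill.
"""
import subprocess
import sys

def stop_container(container_id: str, grace_seconds: int = 30) -> None:
    # Ask the runtime to stop the container, analogous to StopContainer with a timeout.
    subprocess.run(
        ["crictl", "stop", "--timeout", str(grace_seconds), container_id],
        check=True,
    )

if __name__ == "__main__":
    grace = int(sys.argv[2]) if len(sys.argv) > 2 else 30
    stop_container(sys.argv[1], grace_seconds=grace)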
Jan 30 14:37:00.798525 containerd[1476]: time="2025-01-30T14:37:00.798323402Z" level=info msg="shim disconnected" id=321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318 namespace=k8s.io Jan 30 14:37:00.798950 containerd[1476]: time="2025-01-30T14:37:00.798827765Z" level=warning msg="cleaning up after shim disconnected" id=321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318 namespace=k8s.io Jan 30 14:37:00.798950 containerd[1476]: time="2025-01-30T14:37:00.798850445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:00.808417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5-rootfs.mount: Deactivated successfully. Jan 30 14:37:00.818530 containerd[1476]: time="2025-01-30T14:37:00.818213132Z" level=info msg="shim disconnected" id=83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5 namespace=k8s.io Jan 30 14:37:00.818530 containerd[1476]: time="2025-01-30T14:37:00.818301493Z" level=warning msg="cleaning up after shim disconnected" id=83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5 namespace=k8s.io Jan 30 14:37:00.818530 containerd[1476]: time="2025-01-30T14:37:00.818346453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:00.829655 containerd[1476]: time="2025-01-30T14:37:00.829595527Z" level=info msg="StopContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" returns successfully" Jan 30 14:37:00.830493 containerd[1476]: time="2025-01-30T14:37:00.830457132Z" level=info msg="StopPodSandbox for \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\"" Jan 30 14:37:00.830619 containerd[1476]: time="2025-01-30T14:37:00.830510413Z" level=info msg="Container to stop \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.832545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e-shm.mount: Deactivated successfully. Jan 30 14:37:00.848028 containerd[1476]: time="2025-01-30T14:37:00.847881327Z" level=info msg="StopContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" returns successfully" Jan 30 14:37:00.848738 systemd[1]: cri-containerd-26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e.scope: Deactivated successfully. 
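The "must be in running or unknown state, current state \"CONTAINER_EXITED\"" lines emitted during StopPodSandbox are informational: the containers listed for the sandbox have already exited, so only the sandbox itself still needs stopping. A container's current CRI state can be read back the same way; a sketch that relies on crictl inspect emitting JSON with a status.state field, as in current crictl releases:

```python
#!/usr/bin/env python3
"""Print the CRI state of a container, e.g. CONTAINER_EXITED.

Sketch: assumes `crictl inspect` outputs JSON with a status.state
field; adjust the key path if your crictl version differs.
"""
import json
import subprocess
import sys

def container_state(container_id: str) -> str:
    out = subprocess.run(
        ["crictl", "inspect", container_id],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["status"]["state"]

if __name__ == "__main__":
    print(container_state(sys.argv[1]))
```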
Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853335202Z" level=info msg="StopPodSandbox for \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\"" Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853386843Z" level=info msg="Container to stop \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853406003Z" level=info msg="Container to stop \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853416083Z" level=info msg="Container to stop \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853426563Z" level=info msg="Container to stop \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.853568 containerd[1476]: time="2025-01-30T14:37:00.853437123Z" level=info msg="Container to stop \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:37:00.863277 systemd[1]: cri-containerd-5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9.scope: Deactivated successfully. Jan 30 14:37:00.886661 containerd[1476]: time="2025-01-30T14:37:00.885728415Z" level=info msg="shim disconnected" id=26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e namespace=k8s.io Jan 30 14:37:00.886661 containerd[1476]: time="2025-01-30T14:37:00.886452899Z" level=warning msg="cleaning up after shim disconnected" id=26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e namespace=k8s.io Jan 30 14:37:00.886661 containerd[1476]: time="2025-01-30T14:37:00.886466420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:00.901011 containerd[1476]: time="2025-01-30T14:37:00.900935714Z" level=info msg="shim disconnected" id=5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9 namespace=k8s.io Jan 30 14:37:00.901011 containerd[1476]: time="2025-01-30T14:37:00.900999595Z" level=warning msg="cleaning up after shim disconnected" id=5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9 namespace=k8s.io Jan 30 14:37:00.901011 containerd[1476]: time="2025-01-30T14:37:00.901009315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:00.916549 containerd[1476]: time="2025-01-30T14:37:00.916488656Z" level=info msg="TearDown network for sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" successfully" Jan 30 14:37:00.916549 containerd[1476]: time="2025-01-30T14:37:00.916536537Z" level=info msg="StopPodSandbox for \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" returns successfully" Jan 30 14:37:00.932060 containerd[1476]: time="2025-01-30T14:37:00.931970718Z" level=info msg="TearDown network for sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" successfully" Jan 30 14:37:00.932060 containerd[1476]: time="2025-01-30T14:37:00.932011598Z" level=info msg="StopPodSandbox for \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" returns successfully" Jan 30 
14:37:00.974244 kubelet[2793]: I0130 14:37:00.973262 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-run\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.974244 kubelet[2793]: I0130 14:37:00.973333 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cni-path\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.974244 kubelet[2793]: I0130 14:37:00.973412 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cj95\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-kube-api-access-2cj95\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.974244 kubelet[2793]: I0130 14:37:00.973455 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hubble-tls\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.974244 kubelet[2793]: I0130 14:37:00.973492 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-kernel\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.974244 kubelet[2793]: I0130 14:37:00.973523 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-etc-cni-netd\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973554 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-net\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973582 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-bpf-maps\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973610 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hostproc\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973643 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-config-path\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973694 2793 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7a69c7c-015f-47af-92f2-6cf24e22bf49-clustermesh-secrets\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976829 kubelet[2793]: I0130 14:37:00.973737 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmlg\" (UniqueName: \"kubernetes.io/projected/485eb47f-9a7e-4b8c-8647-e4da385f5f38-kube-api-access-dbmlg\") pod \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\" (UID: \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\") " Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.973768 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-lib-modules\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.973799 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-cgroup\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.973831 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-xtables-lock\") pod \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\" (UID: \"f7a69c7c-015f-47af-92f2-6cf24e22bf49\") " Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.973863 2793 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/485eb47f-9a7e-4b8c-8647-e4da385f5f38-cilium-config-path\") pod \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\" (UID: \"485eb47f-9a7e-4b8c-8647-e4da385f5f38\") " Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.974068 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.976977 kubelet[2793]: I0130 14:37:00.974149 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.980375 kubelet[2793]: I0130 14:37:00.974165 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.980375 kubelet[2793]: I0130 14:37:00.977209 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-kube-api-access-2cj95" (OuterVolumeSpecName: "kube-api-access-2cj95") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "kube-api-access-2cj95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:37:00.981179 kubelet[2793]: I0130 14:37:00.980905 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.981975 kubelet[2793]: I0130 14:37:00.981931 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.982057 kubelet[2793]: I0130 14:37:00.981991 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.982057 kubelet[2793]: I0130 14:37:00.982010 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.988452 kubelet[2793]: I0130 14:37:00.988073 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.988452 kubelet[2793]: I0130 14:37:00.988322 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.988452 kubelet[2793]: I0130 14:37:00.988372 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:37:00.989514 kubelet[2793]: I0130 14:37:00.989293 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485eb47f-9a7e-4b8c-8647-e4da385f5f38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "485eb47f-9a7e-4b8c-8647-e4da385f5f38" (UID: "485eb47f-9a7e-4b8c-8647-e4da385f5f38"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:37:00.989624 kubelet[2793]: I0130 14:37:00.989559 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:37:00.989661 kubelet[2793]: I0130 14:37:00.989632 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:37:00.990586 kubelet[2793]: I0130 14:37:00.990404 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485eb47f-9a7e-4b8c-8647-e4da385f5f38-kube-api-access-dbmlg" (OuterVolumeSpecName: "kube-api-access-dbmlg") pod "485eb47f-9a7e-4b8c-8647-e4da385f5f38" (UID: "485eb47f-9a7e-4b8c-8647-e4da385f5f38"). InnerVolumeSpecName "kube-api-access-dbmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:37:00.992563 kubelet[2793]: I0130 14:37:00.992501 2793 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a69c7c-015f-47af-92f2-6cf24e22bf49-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7a69c7c-015f-47af-92f2-6cf24e22bf49" (UID: "f7a69c7c-015f-47af-92f2-6cf24e22bf49"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075017 2793 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-bpf-maps\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075068 2793 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-kernel\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075099 2793 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-etc-cni-netd\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075114 2793 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-host-proc-sys-net\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075142 2793 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dbmlg\" (UniqueName: \"kubernetes.io/projected/485eb47f-9a7e-4b8c-8647-e4da385f5f38-kube-api-access-dbmlg\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075136 kubelet[2793]: I0130 14:37:01.075155 2793 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hostproc\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075166 2793 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-config-path\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075177 2793 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7a69c7c-015f-47af-92f2-6cf24e22bf49-clustermesh-secrets\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075187 2793 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-lib-modules\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075210 2793 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-cgroup\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075222 2793 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-xtables-lock\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075232 2793 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/485eb47f-9a7e-4b8c-8647-e4da385f5f38-cilium-config-path\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 
14:37:01.075243 2793 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cilium-run\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075582 kubelet[2793]: I0130 14:37:01.075255 2793 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7a69c7c-015f-47af-92f2-6cf24e22bf49-cni-path\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075897 kubelet[2793]: I0130 14:37:01.075265 2793 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2cj95\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-kube-api-access-2cj95\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.075897 kubelet[2793]: I0130 14:37:01.075277 2793 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7a69c7c-015f-47af-92f2-6cf24e22bf49-hubble-tls\") on node \"ci-4081-3-0-1-1410e96de7\" DevicePath \"\"" Jan 30 14:37:01.204294 systemd[1]: Removed slice kubepods-besteffort-pod485eb47f_9a7e_4b8c_8647_e4da385f5f38.slice - libcontainer container kubepods-besteffort-pod485eb47f_9a7e_4b8c_8647_e4da385f5f38.slice. Jan 30 14:37:01.204404 systemd[1]: kubepods-besteffort-pod485eb47f_9a7e_4b8c_8647_e4da385f5f38.slice: Consumed 1.113s CPU time. Jan 30 14:37:01.207212 systemd[1]: Removed slice kubepods-burstable-podf7a69c7c_015f_47af_92f2_6cf24e22bf49.slice - libcontainer container kubepods-burstable-podf7a69c7c_015f_47af_92f2_6cf24e22bf49.slice. Jan 30 14:37:01.207353 systemd[1]: kubepods-burstable-podf7a69c7c_015f_47af_92f2_6cf24e22bf49.slice: Consumed 9.648s CPU time. Jan 30 14:37:01.276584 kubelet[2793]: I0130 14:37:01.276286 2793 scope.go:117] "RemoveContainer" containerID="83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5" Jan 30 14:37:01.291103 containerd[1476]: time="2025-01-30T14:37:01.290399228Z" level=info msg="RemoveContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\"" Jan 30 14:37:01.308923 containerd[1476]: time="2025-01-30T14:37:01.308699668Z" level=info msg="RemoveContainer for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" returns successfully" Jan 30 14:37:01.309889 kubelet[2793]: I0130 14:37:01.309829 2793 scope.go:117] "RemoveContainer" containerID="032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c" Jan 30 14:37:01.317900 containerd[1476]: time="2025-01-30T14:37:01.316628560Z" level=info msg="RemoveContainer for \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\"" Jan 30 14:37:01.323134 containerd[1476]: time="2025-01-30T14:37:01.321880674Z" level=info msg="RemoveContainer for \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\" returns successfully" Jan 30 14:37:01.323669 kubelet[2793]: I0130 14:37:01.323599 2793 scope.go:117] "RemoveContainer" containerID="a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19" Jan 30 14:37:01.325952 containerd[1476]: time="2025-01-30T14:37:01.325635019Z" level=info msg="RemoveContainer for \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\"" Jan 30 14:37:01.331112 containerd[1476]: time="2025-01-30T14:37:01.330993894Z" level=info msg="RemoveContainer for \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\" returns successfully" Jan 30 14:37:01.333195 kubelet[2793]: I0130 14:37:01.333155 2793 scope.go:117] "RemoveContainer" 
containerID="1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74" Jan 30 14:37:01.336653 containerd[1476]: time="2025-01-30T14:37:01.336337569Z" level=info msg="RemoveContainer for \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\"" Jan 30 14:37:01.347111 containerd[1476]: time="2025-01-30T14:37:01.346923799Z" level=info msg="RemoveContainer for \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\" returns successfully" Jan 30 14:37:01.347411 kubelet[2793]: I0130 14:37:01.347378 2793 scope.go:117] "RemoveContainer" containerID="8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad" Jan 30 14:37:01.349114 containerd[1476]: time="2025-01-30T14:37:01.348894051Z" level=info msg="RemoveContainer for \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\"" Jan 30 14:37:01.352332 containerd[1476]: time="2025-01-30T14:37:01.352276234Z" level=info msg="RemoveContainer for \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\" returns successfully" Jan 30 14:37:01.352664 kubelet[2793]: I0130 14:37:01.352619 2793 scope.go:117] "RemoveContainer" containerID="83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5" Jan 30 14:37:01.353235 containerd[1476]: time="2025-01-30T14:37:01.353153599Z" level=error msg="ContainerStatus for \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\": not found" Jan 30 14:37:01.353487 kubelet[2793]: E0130 14:37:01.353452 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\": not found" containerID="83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5" Jan 30 14:37:01.353595 kubelet[2793]: I0130 14:37:01.353495 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5"} err="failed to get container status \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"83475a103c2b27fdc2c116b9ebff5b5648663dfae9650089eeccc9fbf20f03b5\": not found" Jan 30 14:37:01.353641 kubelet[2793]: I0130 14:37:01.353597 2793 scope.go:117] "RemoveContainer" containerID="032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c" Jan 30 14:37:01.353992 containerd[1476]: time="2025-01-30T14:37:01.353882884Z" level=error msg="ContainerStatus for \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\": not found" Jan 30 14:37:01.354306 kubelet[2793]: E0130 14:37:01.354149 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\": not found" containerID="032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c" Jan 30 14:37:01.354306 kubelet[2793]: I0130 14:37:01.354188 2793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c"} err="failed to get container status \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\": rpc error: code = NotFound desc = an error occurred when try to find container \"032022e02b6d4bc9aac1938d320e7c00f848d02affff596f8940ed8cdfa0352c\": not found" Jan 30 14:37:01.354306 kubelet[2793]: I0130 14:37:01.354214 2793 scope.go:117] "RemoveContainer" containerID="a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19" Jan 30 14:37:01.355026 containerd[1476]: time="2025-01-30T14:37:01.354628529Z" level=error msg="ContainerStatus for \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\": not found" Jan 30 14:37:01.355136 kubelet[2793]: E0130 14:37:01.354867 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\": not found" containerID="a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19" Jan 30 14:37:01.355136 kubelet[2793]: I0130 14:37:01.354917 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19"} err="failed to get container status \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9611979419b2431db02edd9b5e0e1f91f60672b0c8f7e2d5e5acd8c3866ea19\": not found" Jan 30 14:37:01.355136 kubelet[2793]: I0130 14:37:01.354939 2793 scope.go:117] "RemoveContainer" containerID="1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74" Jan 30 14:37:01.355257 containerd[1476]: time="2025-01-30T14:37:01.355216493Z" level=error msg="ContainerStatus for \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\": not found" Jan 30 14:37:01.355571 kubelet[2793]: E0130 14:37:01.355497 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\": not found" containerID="1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74" Jan 30 14:37:01.355625 kubelet[2793]: I0130 14:37:01.355580 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74"} err="failed to get container status \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\": rpc error: code = NotFound desc = an error occurred when try to find container \"1450ca36e757cab3b429976615caf16120656c6b79b22f89992856b3d8bced74\": not found" Jan 30 14:37:01.355625 kubelet[2793]: I0130 14:37:01.355604 2793 scope.go:117] "RemoveContainer" containerID="8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad" Jan 30 14:37:01.355997 containerd[1476]: time="2025-01-30T14:37:01.355926978Z" level=error msg="ContainerStatus for 
\"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\": not found" Jan 30 14:37:01.356442 kubelet[2793]: E0130 14:37:01.356237 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\": not found" containerID="8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad" Jan 30 14:37:01.356442 kubelet[2793]: I0130 14:37:01.356274 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad"} err="failed to get container status \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c8745d8376cdd09f82a9404364b9ebd4aeb0d0d03fd632ea90db53dae7c14ad\": not found" Jan 30 14:37:01.356442 kubelet[2793]: I0130 14:37:01.356347 2793 scope.go:117] "RemoveContainer" containerID="321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318" Jan 30 14:37:01.357987 containerd[1476]: time="2025-01-30T14:37:01.357935231Z" level=info msg="RemoveContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\"" Jan 30 14:37:01.362626 containerd[1476]: time="2025-01-30T14:37:01.362568781Z" level=info msg="RemoveContainer for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" returns successfully" Jan 30 14:37:01.363134 kubelet[2793]: I0130 14:37:01.362839 2793 scope.go:117] "RemoveContainer" containerID="321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318" Jan 30 14:37:01.363298 containerd[1476]: time="2025-01-30T14:37:01.363201945Z" level=error msg="ContainerStatus for \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\": not found" Jan 30 14:37:01.363608 kubelet[2793]: E0130 14:37:01.363573 2793 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\": not found" containerID="321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318" Jan 30 14:37:01.363828 kubelet[2793]: I0130 14:37:01.363780 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318"} err="failed to get container status \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\": rpc error: code = NotFound desc = an error occurred when try to find container \"321e01b630620e96fdba9bedc375004e4cabce9cd71b8bb167d6135eac596318\": not found" Jan 30 14:37:01.701669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e-rootfs.mount: Deactivated successfully. Jan 30 14:37:01.702108 systemd[1]: var-lib-kubelet-pods-485eb47f\x2d9a7e\x2d4b8c\x2d8647\x2de4da385f5f38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddbmlg.mount: Deactivated successfully. 
Jan 30 14:37:01.702206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9-rootfs.mount: Deactivated successfully. Jan 30 14:37:01.702276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9-shm.mount: Deactivated successfully. Jan 30 14:37:01.702341 systemd[1]: var-lib-kubelet-pods-f7a69c7c\x2d015f\x2d47af\x2d92f2\x2d6cf24e22bf49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2cj95.mount: Deactivated successfully. Jan 30 14:37:01.702401 systemd[1]: var-lib-kubelet-pods-f7a69c7c\x2d015f\x2d47af\x2d92f2\x2d6cf24e22bf49-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:37:01.702464 systemd[1]: var-lib-kubelet-pods-f7a69c7c\x2d015f\x2d47af\x2d92f2\x2d6cf24e22bf49-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:37:02.757717 sshd[5528]: pam_unix(sshd:session): session closed for user core Jan 30 14:37:02.764414 systemd[1]: sshd@143-49.13.124.2:22-139.178.68.195:53360.service: Deactivated successfully. Jan 30 14:37:02.764461 systemd-logind[1459]: Session 83 logged out. Waiting for processes to exit. Jan 30 14:37:02.767318 systemd[1]: session-83.scope: Deactivated successfully. Jan 30 14:37:02.767524 systemd[1]: session-83.scope: Consumed 1.671s CPU time. Jan 30 14:37:02.768568 systemd-logind[1459]: Removed session 83. Jan 30 14:37:02.929219 systemd[1]: Started sshd@144-49.13.124.2:22-139.178.68.195:53372.service - OpenSSH per-connection server daemon (139.178.68.195:53372). Jan 30 14:37:03.195040 kubelet[2793]: I0130 14:37:03.194967 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485eb47f-9a7e-4b8c-8647-e4da385f5f38" path="/var/lib/kubelet/pods/485eb47f-9a7e-4b8c-8647-e4da385f5f38/volumes" Jan 30 14:37:03.196584 kubelet[2793]: I0130 14:37:03.196072 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" path="/var/lib/kubelet/pods/f7a69c7c-015f-47af-92f2-6cf24e22bf49/volumes" Jan 30 14:37:03.918591 sshd[5687]: Accepted publickey for core from 139.178.68.195 port 53372 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:37:03.921874 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:37:03.928532 systemd-logind[1459]: New session 84 of user core. Jan 30 14:37:03.946488 systemd[1]: Started session-84.scope - Session 84 of User core. 
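Once the mounts are gone, the kubelet logs "Cleaned up orphaned pod volumes dir" for both pod UIDs, i.e. the per-pod directories under /var/lib/kubelet/pods that no longer belong to a live pod. A sketch of the same cross-check, comparing on-disk UID directories against the pods the API server currently reports (kubectl is used purely for illustration, must be configured on the node, and the script needs root to read the directory):

```python
#!/usr/bin/env python3
"""Find /var/lib/kubelet/pods entries whose UID no longer maps to a live pod.

Sketch: assumes kubectl on the node has read access to all namespaces.
"""
import os
import subprocess

PODS_DIR = "/var/lib/kubelet/pods"

def live_pod_uids() -> set:
    out = subprocess.run(
        ["kubectl", "get", "pods", "--all-namespaces",
         "-o", "jsonpath={.items[*].metadata.uid}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return set(out.split())

def main():
    on_disk = set(os.listdir(PODS_DIR)) if os.path.isdir(PODS_DIR) else set()
    for uid in sorted(on_disk - live_pod_uids()):
        print("possibly orphaned:", os.path.join(PODS_DIR, uid))

if __name__ == "__main__":
    main()
```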
Jan 30 14:37:04.503412 kubelet[2793]: E0130 14:37:04.503283 2793 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:37:05.399707 kubelet[2793]: I0130 14:37:05.399648 2793 topology_manager.go:215] "Topology Admit Handler" podUID="57edd22f-2beb-4522-b360-18f7dea854f9" podNamespace="kube-system" podName="cilium-5767b" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399921 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="apply-sysctl-overwrites" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399939 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="mount-bpf-fs" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399945 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="mount-cgroup" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399963 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="clean-cilium-state" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399970 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="485eb47f-9a7e-4b8c-8647-e4da385f5f38" containerName="cilium-operator" Jan 30 14:37:05.400104 kubelet[2793]: E0130 14:37:05.399976 2793 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="cilium-agent" Jan 30 14:37:05.400104 kubelet[2793]: I0130 14:37:05.400003 2793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a69c7c-015f-47af-92f2-6cf24e22bf49" containerName="cilium-agent" Jan 30 14:37:05.400104 kubelet[2793]: I0130 14:37:05.400011 2793 memory_manager.go:354] "RemoveStaleState removing state" podUID="485eb47f-9a7e-4b8c-8647-e4da385f5f38" containerName="cilium-operator" Jan 30 14:37:05.413933 kubelet[2793]: W0130 14:37:05.413824 2793 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.413933 kubelet[2793]: E0130 14:37:05.413875 2793 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.415625 systemd[1]: Created slice kubepods-burstable-pod57edd22f_2beb_4522_b360_18f7dea854f9.slice - libcontainer container kubepods-burstable-pod57edd22f_2beb_4522_b360_18f7dea854f9.slice. 
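The reflector warnings here ("no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object", continued below for hubble-server-certs, cilium-ipsec-keys and cilium-config) come from the node authorizer: a kubelet may only read secrets and configmaps referenced by pods bound to its node, and right after the replacement cilium-5767b pod is admitted there is typically a short window before that binding is visible, so the first list/watch attempts are rejected and later retried. The secrets a pod actually references can be read from its spec; a sketch using the official Kubernetes Python client (assumed installed and configured):

```python
#!/usr/bin/env python3
"""List the Secrets referenced by a pod's volumes.

Sketch using the Kubernetes Python client; covers plain secret volumes
and projected volume sources, which matches how the pod in the log above
mounts clustermesh-secrets, hubble-tls and the ipsec keys.
"""
from kubernetes import client, config

def secrets_referenced(namespace: str, pod_name: str) -> set:
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    pod = client.CoreV1Api().read_namespaced_pod(pod_name, namespace)
    names = set()
    for vol in pod.spec.volumes or []:
        if vol.secret:
            names.add(vol.secret.secret_name)
        if vol.projected:
            for src in vol.projected.sources or []:
                if src.secret:
                    names.add(src.secret.name)
    return names

if __name__ == "__main__":
    print(sorted(secrets_referenced("kube-system", "cilium-5767b")))
```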
Jan 30 14:37:05.420503 kubelet[2793]: W0130 14:37:05.420476 2793 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.420785 kubelet[2793]: E0130 14:37:05.420765 2793 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.420906 kubelet[2793]: W0130 14:37:05.420621 2793 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.421003 kubelet[2793]: E0130 14:37:05.420991 2793 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.421101 kubelet[2793]: W0130 14:37:05.420661 2793 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.421267 kubelet[2793]: E0130 14:37:05.421185 2793 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-1-1410e96de7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-1-1410e96de7' and this object Jan 30 14:37:05.504853 kubelet[2793]: I0130 14:37:05.504697 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-cni-path\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.504853 kubelet[2793]: I0130 14:37:05.504753 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57edd22f-2beb-4522-b360-18f7dea854f9-clustermesh-secrets\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.504853 kubelet[2793]: I0130 14:37:05.504775 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd59\" (UniqueName: \"kubernetes.io/projected/57edd22f-2beb-4522-b360-18f7dea854f9-kube-api-access-dnd59\") pod \"cilium-5767b\" (UID: 
\"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.504853 kubelet[2793]: I0130 14:37:05.504793 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-cilium-cgroup\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.504853 kubelet[2793]: I0130 14:37:05.504808 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57edd22f-2beb-4522-b360-18f7dea854f9-cilium-ipsec-secrets\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.506601 kubelet[2793]: I0130 14:37:05.506334 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-bpf-maps\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.506601 kubelet[2793]: I0130 14:37:05.506556 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-host-proc-sys-kernel\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507092 kubelet[2793]: I0130 14:37:05.506587 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-hostproc\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507092 kubelet[2793]: I0130 14:37:05.506926 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-lib-modules\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507092 kubelet[2793]: I0130 14:37:05.507034 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57edd22f-2beb-4522-b360-18f7dea854f9-cilium-config-path\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507092 kubelet[2793]: I0130 14:37:05.507058 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57edd22f-2beb-4522-b360-18f7dea854f9-hubble-tls\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507453 kubelet[2793]: I0130 14:37:05.507303 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-xtables-lock\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507453 kubelet[2793]: I0130 14:37:05.507337 2793 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-cilium-run\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507612 kubelet[2793]: I0130 14:37:05.507366 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-etc-cni-netd\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.507612 kubelet[2793]: I0130 14:37:05.507564 2793 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57edd22f-2beb-4522-b360-18f7dea854f9-host-proc-sys-net\") pod \"cilium-5767b\" (UID: \"57edd22f-2beb-4522-b360-18f7dea854f9\") " pod="kube-system/cilium-5767b" Jan 30 14:37:05.573372 sshd[5687]: pam_unix(sshd:session): session closed for user core Jan 30 14:37:05.579150 systemd[1]: sshd@144-49.13.124.2:22-139.178.68.195:53372.service: Deactivated successfully. Jan 30 14:37:05.581654 systemd[1]: session-84.scope: Deactivated successfully. Jan 30 14:37:05.584200 systemd-logind[1459]: Session 84 logged out. Waiting for processes to exit. Jan 30 14:37:05.585780 systemd-logind[1459]: Removed session 84. Jan 30 14:37:05.755132 systemd[1]: Started sshd@145-49.13.124.2:22-139.178.68.195:59450.service - OpenSSH per-connection server daemon (139.178.68.195:59450). Jan 30 14:37:06.461570 systemd[1]: Started sshd@146-49.13.124.2:22-140.206.168.98:45450.service - OpenSSH per-connection server daemon (140.206.168.98:45450). Jan 30 14:37:06.611674 kubelet[2793]: E0130 14:37:06.610042 2793 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 30 14:37:06.611674 kubelet[2793]: E0130 14:37:06.610188 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57edd22f-2beb-4522-b360-18f7dea854f9-clustermesh-secrets podName:57edd22f-2beb-4522-b360-18f7dea854f9 nodeName:}" failed. No retries permitted until 2025-01-30 14:37:07.110162884 +0000 UTC m=+798.051560256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/57edd22f-2beb-4522-b360-18f7dea854f9-clustermesh-secrets") pod "cilium-5767b" (UID: "57edd22f-2beb-4522-b360-18f7dea854f9") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:37:06.612463 kubelet[2793]: E0130 14:37:06.612215 2793 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:37:06.612463 kubelet[2793]: E0130 14:37:06.612244 2793 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5767b: failed to sync secret cache: timed out waiting for the condition Jan 30 14:37:06.612463 kubelet[2793]: E0130 14:37:06.612339 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57edd22f-2beb-4522-b360-18f7dea854f9-hubble-tls podName:57edd22f-2beb-4522-b360-18f7dea854f9 nodeName:}" failed. No retries permitted until 2025-01-30 14:37:07.112318578 +0000 UTC m=+798.053715870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/57edd22f-2beb-4522-b360-18f7dea854f9-hubble-tls") pod "cilium-5767b" (UID: "57edd22f-2beb-4522-b360-18f7dea854f9") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:37:06.617715 kubelet[2793]: I0130 14:37:06.617529 2793 setters.go:580] "Node became not ready" node="ci-4081-3-0-1-1410e96de7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:37:06Z","lastTransitionTime":"2025-01-30T14:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 14:37:06.738915 sshd[5701]: Accepted publickey for core from 139.178.68.195 port 59450 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:37:06.740942 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:37:06.746549 systemd-logind[1459]: New session 85 of user core. Jan 30 14:37:06.758401 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 30 14:37:07.224760 containerd[1476]: time="2025-01-30T14:37:07.224615957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5767b,Uid:57edd22f-2beb-4522-b360-18f7dea854f9,Namespace:kube-system,Attempt:0,}" Jan 30 14:37:07.265815 containerd[1476]: time="2025-01-30T14:37:07.263510572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:37:07.265815 containerd[1476]: time="2025-01-30T14:37:07.263568733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:37:07.265815 containerd[1476]: time="2025-01-30T14:37:07.263591373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:37:07.265815 containerd[1476]: time="2025-01-30T14:37:07.263732414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:37:07.294350 systemd[1]: Started cri-containerd-1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b.scope - libcontainer container 1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b. 
Jan 30 14:37:07.325722 containerd[1476]: time="2025-01-30T14:37:07.325649060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5767b,Uid:57edd22f-2beb-4522-b360-18f7dea854f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\"" Jan 30 14:37:07.330669 containerd[1476]: time="2025-01-30T14:37:07.330622493Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:37:07.343045 containerd[1476]: time="2025-01-30T14:37:07.342951374Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011\"" Jan 30 14:37:07.343712 containerd[1476]: time="2025-01-30T14:37:07.343646258Z" level=info msg="StartContainer for \"0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011\"" Jan 30 14:37:07.368307 systemd[1]: Started cri-containerd-0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011.scope - libcontainer container 0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011. Jan 30 14:37:07.397531 containerd[1476]: time="2025-01-30T14:37:07.397447251Z" level=info msg="StartContainer for \"0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011\" returns successfully" Jan 30 14:37:07.408538 systemd[1]: cri-containerd-0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011.scope: Deactivated successfully. Jan 30 14:37:07.422489 sshd[5701]: pam_unix(sshd:session): session closed for user core Jan 30 14:37:07.428136 systemd[1]: session-85.scope: Deactivated successfully. Jan 30 14:37:07.429307 systemd[1]: sshd@145-49.13.124.2:22-139.178.68.195:59450.service: Deactivated successfully. Jan 30 14:37:07.434246 systemd-logind[1459]: Session 85 logged out. Waiting for processes to exit. Jan 30 14:37:07.436057 systemd-logind[1459]: Removed session 85. Jan 30 14:37:07.454667 containerd[1476]: time="2025-01-30T14:37:07.454568986Z" level=info msg="shim disconnected" id=0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011 namespace=k8s.io Jan 30 14:37:07.454667 containerd[1476]: time="2025-01-30T14:37:07.454640227Z" level=warning msg="cleaning up after shim disconnected" id=0f82a905a6ecd44290ccfd008f7353a73e915284b390d6c236420ebc1653a011 namespace=k8s.io Jan 30 14:37:07.454667 containerd[1476]: time="2025-01-30T14:37:07.454654267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:07.602632 systemd[1]: Started sshd@147-49.13.124.2:22-139.178.68.195:59456.service - OpenSSH per-connection server daemon (139.178.68.195:59456). Jan 30 14:37:08.131499 systemd[1]: run-containerd-runc-k8s.io-1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b-runc.8Zl40b.mount: Deactivated successfully. 
Jan 30 14:37:08.192106 kubelet[2793]: E0130 14:37:08.191992 2793 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9ggzz" podUID="6ec2fbda-173d-4dfc-b238-2c1593590df3" Jan 30 14:37:08.313024 containerd[1476]: time="2025-01-30T14:37:08.312202936Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:37:08.330505 containerd[1476]: time="2025-01-30T14:37:08.330215734Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f\"" Jan 30 14:37:08.332632 containerd[1476]: time="2025-01-30T14:37:08.331477143Z" level=info msg="StartContainer for \"ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f\"" Jan 30 14:37:08.375422 systemd[1]: Started cri-containerd-ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f.scope - libcontainer container ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f. Jan 30 14:37:08.409489 containerd[1476]: time="2025-01-30T14:37:08.409367174Z" level=info msg="StartContainer for \"ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f\" returns successfully" Jan 30 14:37:08.416048 systemd[1]: cri-containerd-ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f.scope: Deactivated successfully. Jan 30 14:37:08.446625 containerd[1476]: time="2025-01-30T14:37:08.446505578Z" level=info msg="shim disconnected" id=ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f namespace=k8s.io Jan 30 14:37:08.446625 containerd[1476]: time="2025-01-30T14:37:08.446557578Z" level=warning msg="cleaning up after shim disconnected" id=ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f namespace=k8s.io Jan 30 14:37:08.446625 containerd[1476]: time="2025-01-30T14:37:08.446566058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:08.584774 sshd[5816]: Accepted publickey for core from 139.178.68.195 port 59456 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:37:08.585671 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:37:08.592992 systemd-logind[1459]: New session 86 of user core. Jan 30 14:37:08.598666 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 30 14:37:09.128029 systemd[1]: run-containerd-runc-k8s.io-ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f-runc.dqL4VB.mount: Deactivated successfully. Jan 30 14:37:09.128697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce92cbf5463c22b070036d3a705e40617a53abfe9e1c3c34affc19cca666623f-rootfs.mount: Deactivated successfully. 
Jan 30 14:37:09.316101 containerd[1476]: time="2025-01-30T14:37:09.316021886Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:37:09.337415 containerd[1476]: time="2025-01-30T14:37:09.336830783Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec\"" Jan 30 14:37:09.337880 containerd[1476]: time="2025-01-30T14:37:09.337847950Z" level=info msg="StartContainer for \"6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec\"" Jan 30 14:37:09.379797 systemd[1]: Started cri-containerd-6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec.scope - libcontainer container 6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec. Jan 30 14:37:09.413497 containerd[1476]: time="2025-01-30T14:37:09.413450246Z" level=info msg="StartContainer for \"6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec\" returns successfully" Jan 30 14:37:09.416523 systemd[1]: cri-containerd-6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec.scope: Deactivated successfully. Jan 30 14:37:09.449982 containerd[1476]: time="2025-01-30T14:37:09.449819365Z" level=info msg="shim disconnected" id=6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec namespace=k8s.io Jan 30 14:37:09.449982 containerd[1476]: time="2025-01-30T14:37:09.449943366Z" level=warning msg="cleaning up after shim disconnected" id=6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec namespace=k8s.io Jan 30 14:37:09.449982 containerd[1476]: time="2025-01-30T14:37:09.449965886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:09.504675 kubelet[2793]: E0130 14:37:09.504593 2793 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:37:10.126133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a744e8fa418226b3a05d455c1be0be01b359f2b6d160ebbdf026a24518887ec-rootfs.mount: Deactivated successfully. 
Jan 30 14:37:10.191901 kubelet[2793]: E0130 14:37:10.191298 2793 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9ggzz" podUID="6ec2fbda-173d-4dfc-b238-2c1593590df3" Jan 30 14:37:10.321470 containerd[1476]: time="2025-01-30T14:37:10.321421888Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:37:10.340429 containerd[1476]: time="2025-01-30T14:37:10.340373333Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983\"" Jan 30 14:37:10.342500 containerd[1476]: time="2025-01-30T14:37:10.341130018Z" level=info msg="StartContainer for \"90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983\"" Jan 30 14:37:10.378326 systemd[1]: Started cri-containerd-90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983.scope - libcontainer container 90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983. Jan 30 14:37:10.414713 systemd[1]: cri-containerd-90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983.scope: Deactivated successfully. Jan 30 14:37:10.420627 containerd[1476]: time="2025-01-30T14:37:10.420490819Z" level=info msg="StartContainer for \"90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983\" returns successfully" Jan 30 14:37:10.447877 containerd[1476]: time="2025-01-30T14:37:10.447787078Z" level=info msg="shim disconnected" id=90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983 namespace=k8s.io Jan 30 14:37:10.447877 containerd[1476]: time="2025-01-30T14:37:10.447881879Z" level=warning msg="cleaning up after shim disconnected" id=90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983 namespace=k8s.io Jan 30 14:37:10.448239 containerd[1476]: time="2025-01-30T14:37:10.447899079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:37:11.127680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90c9deac983f86cba519829a459515aa4a869a1eb85cd3ef0d68a666d89f3983-rootfs.mount: Deactivated successfully. Jan 30 14:37:11.329235 containerd[1476]: time="2025-01-30T14:37:11.329189187Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:37:11.349485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530473366.mount: Deactivated successfully. 
Jan 30 14:37:11.364023 containerd[1476]: time="2025-01-30T14:37:11.363858534Z" level=info msg="CreateContainer within sandbox \"1a8e997d6df31a52bc863b8be2feebdc8517792fda1e15d88125c97eab57fa0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0\"" Jan 30 14:37:11.364673 containerd[1476]: time="2025-01-30T14:37:11.364632419Z" level=info msg="StartContainer for \"1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0\"" Jan 30 14:37:11.399392 systemd[1]: Started cri-containerd-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0.scope - libcontainer container 1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0. Jan 30 14:37:11.437839 containerd[1476]: time="2025-01-30T14:37:11.437667779Z" level=info msg="StartContainer for \"1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0\" returns successfully" Jan 30 14:37:11.750115 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 14:37:12.191856 kubelet[2793]: E0130 14:37:12.191755 2793 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9ggzz" podUID="6ec2fbda-173d-4dfc-b238-2c1593590df3" Jan 30 14:37:12.360424 kubelet[2793]: I0130 14:37:12.360341 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5767b" podStartSLOduration=7.360297839 podStartE2EDuration="7.360297839s" podCreationTimestamp="2025-01-30 14:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:37:12.357954464 +0000 UTC m=+803.299351836" watchObservedRunningTime="2025-01-30 14:37:12.360297839 +0000 UTC m=+803.301695211" Jan 30 14:37:12.452896 sshd[5705]: Connection closed by 140.206.168.98 port 45450 [preauth] Jan 30 14:37:12.455479 systemd[1]: sshd@146-49.13.124.2:22-140.206.168.98:45450.service: Deactivated successfully. Jan 30 14:37:14.192461 kubelet[2793]: E0130 14:37:14.191844 2793 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-9ggzz" podUID="6ec2fbda-173d-4dfc-b238-2c1593590df3" Jan 30 14:37:14.754711 systemd-networkd[1376]: lxc_health: Link UP Jan 30 14:37:14.763487 systemd-networkd[1376]: lxc_health: Gained carrier Jan 30 14:37:16.212409 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jan 30 14:37:17.660619 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.kNOANi.mount: Deactivated successfully. Jan 30 14:37:18.576495 systemd[1]: Started sshd@148-49.13.124.2:22-183.88.232.183:35328.service - OpenSSH per-connection server daemon (183.88.232.183:35328). Jan 30 14:37:19.681122 sshd[6623]: Invalid user babak from 183.88.232.183 port 35328 Jan 30 14:37:19.885012 sshd[6623]: Received disconnect from 183.88.232.183 port 35328:11: Bye Bye [preauth] Jan 30 14:37:19.885012 sshd[6623]: Disconnected from invalid user babak 183.88.232.183 port 35328 [preauth] Jan 30 14:37:19.887965 systemd[1]: sshd@148-49.13.124.2:22-183.88.232.183:35328.service: Deactivated successfully. 
Jan 30 14:37:21.894516 systemd[1]: Started sshd@149-49.13.124.2:22-5.250.188.211:41532.service - OpenSSH per-connection server daemon (5.250.188.211:41532). Jan 30 14:37:21.949303 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.3nA25N.mount: Deactivated successfully. Jan 30 14:37:22.200143 sshd[6655]: Invalid user user1 from 5.250.188.211 port 41532 Jan 30 14:37:22.254513 sshd[6655]: Received disconnect from 5.250.188.211 port 41532:11: Bye Bye [preauth] Jan 30 14:37:22.254658 sshd[6655]: Disconnected from invalid user user1 5.250.188.211 port 41532 [preauth] Jan 30 14:37:22.256385 systemd[1]: sshd@149-49.13.124.2:22-5.250.188.211:41532.service: Deactivated successfully. Jan 30 14:37:34.695607 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.PNsEFA.mount: Deactivated successfully. Jan 30 14:37:36.870740 kubelet[2793]: E0130 14:37:36.870644 2793 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43270->127.0.0.1:37957: write tcp 127.0.0.1:43270->127.0.0.1:37957: write: broken pipe Jan 30 14:37:41.075757 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.vC69Ih.mount: Deactivated successfully. Jan 30 14:37:43.197469 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.aJPTEI.mount: Deactivated successfully. Jan 30 14:37:49.242673 containerd[1476]: time="2025-01-30T14:37:49.242577584Z" level=info msg="StopPodSandbox for \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\"" Jan 30 14:37:49.243688 containerd[1476]: time="2025-01-30T14:37:49.243525550Z" level=info msg="TearDown network for sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" successfully" Jan 30 14:37:49.243688 containerd[1476]: time="2025-01-30T14:37:49.243564671Z" level=info msg="StopPodSandbox for \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" returns successfully" Jan 30 14:37:49.244160 containerd[1476]: time="2025-01-30T14:37:49.244131474Z" level=info msg="RemovePodSandbox for \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\"" Jan 30 14:37:49.244224 containerd[1476]: time="2025-01-30T14:37:49.244167835Z" level=info msg="Forcibly stopping sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\"" Jan 30 14:37:49.244251 containerd[1476]: time="2025-01-30T14:37:49.244227315Z" level=info msg="TearDown network for sandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" successfully" Jan 30 14:37:49.248539 containerd[1476]: time="2025-01-30T14:37:49.248466783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:37:49.249011 containerd[1476]: time="2025-01-30T14:37:49.248545584Z" level=info msg="RemovePodSandbox \"26b4f56b9dfeae764c932be776db03016938ececa36613c08b6a7eefa642692e\" returns successfully" Jan 30 14:37:49.249140 containerd[1476]: time="2025-01-30T14:37:49.249112467Z" level=info msg="StopPodSandbox for \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\"" Jan 30 14:37:49.249217 containerd[1476]: time="2025-01-30T14:37:49.249186748Z" level=info msg="TearDown network for sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" successfully" Jan 30 14:37:49.249217 containerd[1476]: time="2025-01-30T14:37:49.249198988Z" level=info msg="StopPodSandbox for \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" returns successfully" Jan 30 14:37:49.249536 containerd[1476]: time="2025-01-30T14:37:49.249480430Z" level=info msg="RemovePodSandbox for \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\"" Jan 30 14:37:49.249536 containerd[1476]: time="2025-01-30T14:37:49.249513710Z" level=info msg="Forcibly stopping sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\"" Jan 30 14:37:49.249652 containerd[1476]: time="2025-01-30T14:37:49.249558070Z" level=info msg="TearDown network for sandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" successfully" Jan 30 14:37:49.252690 containerd[1476]: time="2025-01-30T14:37:49.252638611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:37:49.252786 containerd[1476]: time="2025-01-30T14:37:49.252702051Z" level=info msg="RemovePodSandbox \"5f92022ed7438e477159d8e973bab37f43a6023df1c7c1ba453fc91e605e7fe9\" returns successfully" Jan 30 14:37:53.842515 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.hBnbQJ.mount: Deactivated successfully. Jan 30 14:37:53.961604 systemd[1]: Started sshd@150-49.13.124.2:22-80.251.219.209:52644.service - OpenSSH per-connection server daemon (80.251.219.209:52644). Jan 30 14:37:54.252954 systemd[1]: Started sshd@151-49.13.124.2:22-140.206.168.98:37348.service - OpenSSH per-connection server daemon (140.206.168.98:37348). Jan 30 14:37:55.056456 sshd[6988]: Invalid user ubuntu from 80.251.219.209 port 52644 Jan 30 14:37:55.253120 sshd[6988]: Received disconnect from 80.251.219.209 port 52644:11: Bye Bye [preauth] Jan 30 14:37:55.253120 sshd[6988]: Disconnected from invalid user ubuntu 80.251.219.209 port 52644 [preauth] Jan 30 14:37:55.256502 systemd[1]: sshd@150-49.13.124.2:22-80.251.219.209:52644.service: Deactivated successfully. Jan 30 14:37:57.495391 systemd[1]: Started sshd@152-49.13.124.2:22-45.207.58.154:34410.service - OpenSSH per-connection server daemon (45.207.58.154:34410). Jan 30 14:37:58.100846 systemd[1]: run-containerd-runc-k8s.io-1e274feaaa72cde45177954ca9e3c3a82678aa652575ec98a3602aa87f7ac6d0-runc.EKmcLs.mount: Deactivated successfully. 
Jan 30 14:37:59.836912 sshd[7015]: Invalid user alex from 45.207.58.154 port 34410 Jan 30 14:38:00.123329 sshd[7015]: Received disconnect from 45.207.58.154 port 34410:11: Bye Bye [preauth] Jan 30 14:38:00.123329 sshd[7015]: Disconnected from invalid user alex 45.207.58.154 port 34410 [preauth] Jan 30 14:38:00.126338 systemd[1]: sshd@152-49.13.124.2:22-45.207.58.154:34410.service: Deactivated successfully. Jan 30 14:38:04.762720 systemd[1]: Started sshd@153-49.13.124.2:22-83.212.75.149:50078.service - OpenSSH per-connection server daemon (83.212.75.149:50078). Jan 30 14:38:05.124564 sshd[7103]: Invalid user dev from 83.212.75.149 port 50078 Jan 30 14:38:05.179289 sshd[7103]: Received disconnect from 83.212.75.149 port 50078:11: Bye Bye [preauth] Jan 30 14:38:05.179289 sshd[7103]: Disconnected from invalid user dev 83.212.75.149 port 50078 [preauth] Jan 30 14:38:05.181955 systemd[1]: sshd@153-49.13.124.2:22-83.212.75.149:50078.service: Deactivated successfully. Jan 30 14:38:10.946325 sshd[5816]: pam_unix(sshd:session): session closed for user core Jan 30 14:38:10.951916 systemd[1]: sshd@147-49.13.124.2:22-139.178.68.195:59456.service: Deactivated successfully. Jan 30 14:38:10.955660 systemd[1]: session-86.scope: Deactivated successfully. Jan 30 14:38:10.957311 systemd-logind[1459]: Session 86 logged out. Waiting for processes to exit. Jan 30 14:38:10.960567 systemd-logind[1459]: Removed session 86. Jan 30 14:38:18.167383 systemd[1]: sshd@133-49.13.124.2:22-140.206.168.98:34198.service: Deactivated successfully.