Feb 13 15:57:20.900610 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:57:20.900655 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:57:20.900694 kernel: KASLR enabled
Feb 13 15:57:20.900712 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:57:20.900725 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Feb 13 15:57:20.900739 kernel: random: crng init done
Feb 13 15:57:20.900755 kernel: secureboot: Secure boot disabled
Feb 13 15:57:20.900768 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:57:20.900781 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:57:20.900798 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:57:20.900813 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900827 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900840 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900854 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900870 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900887 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900901 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900915 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900930 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:57:20.900945 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:57:20.900959 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:57:20.900974 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:57:20.900989 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:57:20.901004 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Feb 13 15:57:20.901018 kernel: Zone ranges:
Feb 13 15:57:20.901036 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:57:20.901050 kernel: DMA32 empty
Feb 13 15:57:20.901066 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:57:20.901080 kernel: Movable zone start for each node
Feb 13 15:57:20.901095 kernel: Early memory node ranges
Feb 13 15:57:20.901109 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 15:57:20.901124 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:57:20.901139 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:57:20.902321 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:57:20.902337 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:57:20.902344 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:57:20.902351 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:57:20.902363 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:57:20.902370 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:57:20.902377 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:57:20.902387 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:57:20.902393 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:57:20.902400 kernel: psci: Trusted OS migration not required
Feb 13 15:57:20.902409 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:57:20.902416 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:57:20.902423 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:57:20.902430 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:57:20.902437 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:57:20.902444 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:57:20.902451 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:57:20.902458 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:57:20.902465 kernel: CPU features: detected: Spectre-v4
Feb 13 15:57:20.902472 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:57:20.902480 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:57:20.902487 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:57:20.902494 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:57:20.902501 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:57:20.902508 kernel: alternatives: applying boot alternatives
Feb 13 15:57:20.902517 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:57:20.902524 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:57:20.902531 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:57:20.902537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:57:20.902544 kernel: Fallback order for Node 0: 0
Feb 13 15:57:20.902551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:57:20.902577 kernel: Policy zone: Normal
Feb 13 15:57:20.902585 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:57:20.902592 kernel: software IO TLB: area num 2.
Feb 13 15:57:20.902599 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 15:57:20.902606 kernel: Memory: 3882676K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 213324K reserved, 0K cma-reserved)
Feb 13 15:57:20.902613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:57:20.902620 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:57:20.902628 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:57:20.902634 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:57:20.902641 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:57:20.902648 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:57:20.902655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
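[Editor's note: the "Kernel command line:" entry above is a flat string of space-separated key=value pairs and bare flags; several of them (flatcar.first_boot, flatcar.oem.id, verity.usrhash) are re-read later in this boot by dracut and Ignition. Below is a minimal sketch of splitting such a line into a dictionary for inspection, assuming simple space-separated tokens; the kernel's own tokenizer also honors double-quoted values, which this sketch does not.]

# Illustrative parser for a kernel command line like the one logged above.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare flags become True
    return params

params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT mount.usrflags=ro "
    "console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force "
    "flatcar.oem.id=hetzner"
)
assert params["flatcar.oem.id"] == "hetzner"
assert params["root"] == "LABEL=ROOT"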
Feb 13 15:57:20.902665 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:57:20.902672 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:57:20.902678 kernel: GICv3: 256 SPIs implemented
Feb 13 15:57:20.902685 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:57:20.902691 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:57:20.902698 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:57:20.902705 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:57:20.902711 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:57:20.902718 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:57:20.902725 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:57:20.902732 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:57:20.902741 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:57:20.902748 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:57:20.902754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:57:20.902761 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:57:20.902768 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:57:20.902775 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:57:20.902781 kernel: Console: colour dummy device 80x25
Feb 13 15:57:20.902789 kernel: ACPI: Core revision 20230628
Feb 13 15:57:20.902796 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:57:20.902803 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:57:20.902812 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:57:20.902819 kernel: landlock: Up and running.
Feb 13 15:57:20.902825 kernel: SELinux: Initializing.
Feb 13 15:57:20.902832 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:57:20.902840 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:57:20.902847 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:57:20.902854 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:57:20.902862 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:57:20.902869 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:57:20.902875 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:57:20.902884 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:57:20.902891 kernel: Remapping and enabling EFI services.
Feb 13 15:57:20.902898 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:57:20.902905 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:57:20.902913 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:57:20.902920 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:57:20.902927 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:57:20.902934 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:57:20.902941 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:57:20.902950 kernel: SMP: Total of 2 processors activated.
Feb 13 15:57:20.902957 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:57:20.902971 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:57:20.902980 kernel: CPU features: detected: Common not Private translations
Feb 13 15:57:20.902987 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:57:20.902995 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:57:20.903003 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:57:20.903011 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:57:20.903018 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:57:20.903027 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:57:20.903035 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:57:20.903042 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:57:20.903050 kernel: alternatives: applying system-wide alternatives
Feb 13 15:57:20.903057 kernel: devtmpfs: initialized
Feb 13 15:57:20.903065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:57:20.903073 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:57:20.903080 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:57:20.903089 kernel: SMBIOS 3.0.0 present.
Feb 13 15:57:20.903096 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:57:20.903104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:57:20.903112 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:57:20.903119 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:57:20.903127 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:57:20.903134 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:57:20.903141 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Feb 13 15:57:20.903162 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:57:20.903171 kernel: cpuidle: using governor menu
Feb 13 15:57:20.904277 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:57:20.904286 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:57:20.904294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:57:20.904302 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:57:20.904309 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:57:20.904317 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:57:20.904324 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:57:20.904332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:57:20.904347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:57:20.904354 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:57:20.904362 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:57:20.904370 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:57:20.904377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:57:20.904385 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:57:20.904392 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:57:20.904399 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:57:20.904407 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:57:20.904417 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:57:20.904424 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:57:20.904431 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:57:20.904439 kernel: ACPI: Interpreter enabled
Feb 13 15:57:20.904446 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:57:20.904454 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:57:20.904461 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:57:20.904469 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:57:20.904476 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:57:20.904729 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:57:20.904819 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:57:20.904895 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:57:20.904958 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:57:20.905613 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:57:20.905634 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:57:20.905642 kernel: PCI host bridge to bus 0000:00
Feb 13 15:57:20.905734 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:57:20.905799 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:57:20.907688 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:57:20.908148 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:57:20.908411 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:57:20.908506 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:57:20.908642 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:57:20.908736 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:57:20.908839 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.908930 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:57:20.909024 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.909107 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:57:20.909218 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.909307 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:57:20.909396 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.909480 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:57:20.909587 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.909677 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:57:20.909773 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.909861 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:57:20.909951 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.910034 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:57:20.910123 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.910295 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:57:20.910389 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:57:20.910475 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:57:20.910604 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:57:20.910706 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:57:20.910800 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:57:20.910883 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:57:20.910963 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:57:20.911059 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:57:20.911209 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:57:20.911299 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:57:20.911389 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:57:20.911471 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:57:20.911551 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:57:20.911703 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:57:20.911798 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:57:20.911887 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:57:20.911969 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:57:20.912051 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:57:20.912140 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:57:20.914433 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:57:20.914540 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:57:20.914657 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:57:20.914730 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:57:20.914799 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:57:20.914960 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:57:20.915038 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:57:20.915113 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:57:20.915195 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:57:20.915267 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:57:20.915333 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:57:20.915398 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:57:20.915468 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:57:20.915534 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:57:20.915610 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:57:20.915697 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:57:20.915766 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:57:20.915831 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:57:20.915900 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:57:20.915966 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:57:20.916031 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:57:20.916100 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:57:20.918281 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:57:20.918390 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:57:20.918465 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:57:20.918531 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:57:20.918618 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:57:20.918692 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:57:20.918761 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:57:20.918829 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:57:20.918909 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:57:20.918977 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:57:20.919042 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:57:20.919112 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:57:20.919196 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:57:20.919276 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:57:20.919343 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:57:20.919425 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:57:20.919493 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:57:20.919609 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:57:20.919693 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:57:20.919766 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:57:20.919833 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:57:20.919901 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:57:20.919972 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:57:20.920039 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:57:20.920105 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:57:20.922334 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:57:20.922441 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:57:20.922512 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:57:20.922611 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:57:20.922687 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:57:20.922753 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:57:20.922822 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:57:20.922888 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:57:20.922956 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:57:20.923022 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:57:20.923089 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:57:20.923246 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:57:20.923323 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:57:20.923392 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:57:20.923461 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:57:20.923527 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:57:20.923656 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:57:20.923728 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:57:20.923800 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:57:20.923873 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:57:20.923944 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:57:20.924010 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:57:20.924078 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:57:20.924147 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:57:20.924239 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:57:20.924316 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:57:20.924386 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:57:20.924458 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:57:20.924526 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:57:20.924603 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:57:20.924670 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:57:20.924737 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:57:20.924809 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:57:20.924882 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:57:20.924948 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:57:20.925011 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:57:20.925077 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:57:20.925161 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:57:20.925235 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:57:20.925309 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:57:20.925377 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:57:20.925442 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:57:20.925523 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:57:20.925613 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:57:20.925684 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:57:20.925759 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:57:20.925827 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:57:20.925896 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:57:20.925985 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:57:20.926059 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:57:20.926126 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:57:20.926209 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:57:20.928295 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:57:20.928397 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:57:20.928476 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:57:20.928557 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:57:20.928685 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:57:20.928820 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:57:20.928895 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:57:20.928962 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:57:20.929040 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:57:20.929111 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:57:20.929408 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:57:20.929494 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:57:20.929577 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:57:20.929652 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:57:20.929719 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:57:20.929787 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:57:20.929853 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:57:20.929918 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:57:20.929986 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:57:20.930060 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:57:20.930126 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:57:20.930205 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:57:20.930287 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:57:20.930369 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:57:20.930443 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:57:20.930502 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:57:20.930611 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:57:20.930678 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:57:20.930737 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:57:20.930810 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:57:20.930887 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:57:20.930962 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:57:20.931038 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:57:20.931103 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:57:20.933280 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:57:20.933387 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:57:20.933451 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:57:20.933517 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:57:20.933636 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:57:20.933717 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:57:20.933795 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:57:20.933879 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:57:20.933951 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:57:20.934011 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:57:20.934098 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:57:20.934216 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:57:20.934297 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:57:20.934369 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:57:20.934432 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:57:20.934508 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:57:20.934654 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:57:20.934734 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:57:20.934809 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:57:20.934821 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:57:20.934830 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:57:20.934840 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:57:20.934849 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:57:20.934859 kernel: iommu: Default domain type: Translated
Feb 13 15:57:20.934872 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:57:20.934881 kernel: efivars: Registered efivars operations
Feb 13 15:57:20.934890 kernel: vgaarb: loaded
Feb 13 15:57:20.934899 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:57:20.934908 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:57:20.934916 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:57:20.934924 kernel: pnp: PnP ACPI init
Feb 13 15:57:20.935004 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:57:20.935016 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:57:20.935026 kernel: NET: Registered PF_INET protocol family
Feb 13 15:57:20.935035 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:57:20.935043 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:57:20.935051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:57:20.935059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:57:20.935067 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:57:20.935075 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:57:20.935083 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:57:20.935093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:57:20.935101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:57:20.935846 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:57:20.935871 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:57:20.935879 kernel: kvm [1]: HYP mode not available
Feb 13 15:57:20.935887 kernel: Initialise system trusted keyrings
Feb 13 15:57:20.935895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:57:20.935903 kernel: Key type asymmetric registered
Feb 13 15:57:20.935910 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:57:20.935918 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:57:20.935933 kernel: io scheduler mq-deadline registered
Feb 13 15:57:20.935941 kernel: io scheduler kyber registered
Feb 13 15:57:20.935949 kernel: io scheduler bfq registered
Feb 13 15:57:20.935957 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:57:20.936035 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:57:20.936103 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:57:20.936192 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.936270 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:57:20.936350 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:57:20.936417 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.936489 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Feb 13 15:57:20.936557 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Feb 13 15:57:20.936644 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.936725 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Feb 13 15:57:20.936794 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Feb 13 15:57:20.936860 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.936930 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Feb 13 15:57:20.936997 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Feb 13 15:57:20.937063 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.937137 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Feb 13 15:57:20.937300 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Feb 13 15:57:20.937371 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.937442 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Feb 13 15:57:20.937506 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Feb 13 15:57:20.937615 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.937709 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Feb 13 15:57:20.937785 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Feb 13 15:57:20.937851 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.937863 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Feb 13 15:57:20.937931 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Feb 13 15:57:20.938009 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Feb 13 15:57:20.938080 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:57:20.938092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:57:20.938100 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:57:20.938108 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:57:20.938209 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Feb 13 15:57:20.940089 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Feb 13 15:57:20.940118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:57:20.940127 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:57:20.940233 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Feb 13 15:57:20.940256 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Feb 13 15:57:20.940265 kernel: thunder_xcv, ver 1.0
Feb 13 15:57:20.940273 kernel: thunder_bgx, ver 1.0
Feb 13 15:57:20.940281 kernel: nicpf, ver 1.0
Feb 13 15:57:20.940288 kernel: nicvf, ver 1.0
Feb 13 15:57:20.940379 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:57:20.940445 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:57:20 UTC (1739462240)
Feb 13 15:57:20.940456 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:57:20.940466 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:57:20.940474 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:57:20.940483 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:57:20.940491 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:57:20.940498 kernel: Segment Routing with IPv6
Feb 13 15:57:20.940506 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:57:20.940514 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:57:20.940522 kernel: Key type dns_resolver registered
Feb 13 15:57:20.940531 kernel: registered taskstats version 1
Feb 13 15:57:20.940541 kernel: Loading compiled-in X.509 certificates
Feb 13 15:57:20.940549 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:57:20.940556 kernel: Key type .fscrypt registered
Feb 13 15:57:20.940579 kernel: Key type fscrypt-provisioning registered
Feb 13 15:57:20.940587 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:57:20.940595 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:57:20.940603 kernel: ima: No architecture policies found
Feb 13 15:57:20.940611 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:57:20.940619 kernel: clk: Disabling unused clocks
Feb 13 15:57:20.940629 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:57:20.940639 kernel: Run /init as init process
Feb 13 15:57:20.940647 kernel: with arguments:
Feb 13 15:57:20.940655 kernel: /init
Feb 13 15:57:20.940662 kernel: with environment:
Feb 13 15:57:20.940670 kernel: HOME=/
Feb 13 15:57:20.940677 kernel: TERM=linux
Feb 13 15:57:20.940685 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:57:20.940695 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:57:20.940706 systemd[1]: Detected virtualization kvm.
Feb 13 15:57:20.940715 systemd[1]: Detected architecture arm64.
Feb 13 15:57:20.940723 systemd[1]: Running in initrd.
Feb 13 15:57:20.940731 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:57:20.940739 systemd[1]: Hostname set to .
Feb 13 15:57:20.940748 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:57:20.940756 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:57:20.940766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:57:20.940775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:57:20.940785 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:57:20.940793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:57:20.940802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:57:20.940810 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:57:20.940820 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:57:20.940831 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:57:20.940839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:57:20.940848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:57:20.940856 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:57:20.940864 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:57:20.940873 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:57:20.940881 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:57:20.940889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:57:20.940899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:57:20.940908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:57:20.940916 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:57:20.940925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:57:20.940933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:57:20.940941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:57:20.940950 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:57:20.940958 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:57:20.940968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:57:20.940976 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:57:20.940984 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:57:20.940992 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:57:20.941000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:57:20.941009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:57:20.941017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:57:20.941025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:57:20.941059 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 15:57:20.941082 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:57:20.941093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:57:20.941101 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:57:20.941110 kernel: Bridge firewalling registered
Feb 13 15:57:20.941117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:57:20.941126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:57:20.941134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:57:20.941143 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:57:20.941179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:57:20.941188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:57:20.941198 systemd-journald[237]: Journal started
Feb 13 15:57:20.941217 systemd-journald[237]: Runtime Journal (/run/log/journal/5af9178474934a83beae637592e89a73) is 8.0M, max 76.6M, 68.6M free.
Feb 13 15:57:20.889956 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 15:57:20.913271 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 15:57:20.945211 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:57:20.945270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:57:20.970498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:57:20.972688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:57:20.975577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:57:20.981392 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:57:20.985260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:57:20.996627 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:57:21.011402 dracut-cmdline[270]: dracut-dracut-053
Feb 13 15:57:21.015099 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:57:21.042373 systemd-resolved[272]: Positive Trust Anchors:
Feb 13 15:57:21.042474 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:57:21.042507 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:57:21.048357 systemd-resolved[272]: Defaulting to hostname 'linux'.
Feb 13 15:57:21.050120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:57:21.050818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:57:21.131226 kernel: SCSI subsystem initialized
Feb 13 15:57:21.136219 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:57:21.144199 kernel: iscsi: registered transport (tcp)
Feb 13 15:57:21.159247 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:57:21.159378 kernel: QLogic iSCSI HBA Driver
Feb 13 15:57:21.206533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:57:21.219477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:57:21.240603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:57:21.240722 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:57:21.240752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:57:21.290235 kernel: raid6: neonx8 gen() 15577 MB/s
Feb 13 15:57:21.307203 kernel: raid6: neonx4 gen() 15588 MB/s
Feb 13 15:57:21.324193 kernel: raid6: neonx2 gen() 13163 MB/s
Feb 13 15:57:21.341210 kernel: raid6: neonx1 gen() 10451 MB/s
Feb 13 15:57:21.358199 kernel: raid6: int64x8 gen() 6922 MB/s
Feb 13 15:57:21.375231 kernel: raid6: int64x4 gen() 7246 MB/s
Feb 13 15:57:21.392291 kernel: raid6: int64x2 gen() 6084 MB/s
Feb 13 15:57:21.409292 kernel: raid6: int64x1 gen() 5022 MB/s
Feb 13 15:57:21.409368 kernel: raid6: using algorithm neonx4 gen() 15588 MB/s
Feb 13 15:57:21.426239 kernel: raid6: .... xor() 12380 MB/s, rmw enabled
Feb 13 15:57:21.426341 kernel: raid6: using neon recovery algorithm
Feb 13 15:57:21.431203 kernel: xor: measuring software checksum speed
Feb 13 15:57:21.431270 kernel: 8regs : 19759 MB/sec
Feb 13 15:57:21.432371 kernel: 32regs : 19631 MB/sec
Feb 13 15:57:21.432413 kernel: arm64_neon : 26989 MB/sec
Feb 13 15:57:21.432424 kernel: xor: using function: arm64_neon (26989 MB/sec)
Feb 13 15:57:21.484303 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:57:21.502197 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:57:21.509433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:57:21.530937 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Feb 13 15:57:21.534381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:57:21.546237 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:57:21.558719 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Feb 13 15:57:21.591356 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:57:21.597375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:57:21.643811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:57:21.649534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:57:21.666076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:57:21.668630 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:57:21.669492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:57:21.670040 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:57:21.681347 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:57:21.695082 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:57:21.755410 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:57:21.760388 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:57:21.760464 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Feb 13 15:57:21.760986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:57:21.761095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
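[Editor's note: the raid6 and xor entries above show the kernel benchmarking every available implementation and keeping the fastest one ("using algorithm neonx4", "using function: arm64_neon"). A sketch of that selection step, using the MB/s figures measured during this boot; the selection function is illustrative, not the kernel's code.]

# Throughputs copied from the raid6/xor benchmark lines in the log above.
raid6_gen = {
    "neonx8": 15577, "neonx4": 15588, "neonx2": 13163, "neonx1": 10451,
    "int64x8": 6922, "int64x4": 7246, "int64x2": 6084, "int64x1": 5022,
}
xor_speeds = {"8regs": 19759, "32regs": 19631, "arm64_neon": 26989}

def fastest(results: dict) -> str:
    # Keep the implementation with the highest measured MB/s.
    return max(results, key=results.get)

assert fastest(raid6_gen) == "neonx4"      # matches "using algorithm neonx4"
assert fastest(xor_speeds) == "arm64_neon" # matches "using function: arm64_neon"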
Feb 13 15:57:21.763302 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:57:21.763846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:57:21.764000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:57:21.769336 kernel: ACPI: bus type USB registered
Feb 13 15:57:21.769357 kernel: usbcore: registered new interface driver usbfs
Feb 13 15:57:21.769368 kernel: usbcore: registered new interface driver hub
Feb 13 15:57:21.769378 kernel: usbcore: registered new device driver usb
Feb 13 15:57:21.764613 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:57:21.778385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:57:21.794194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:57:21.801326 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:57:21.808689 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Feb 13 15:57:21.822764 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Feb 13 15:57:21.822873 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 15:57:21.822953 kernel: sr 0:0:0:0: Power-on or device reset occurred
Feb 13 15:57:21.831051 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Feb 13 15:57:21.831236 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Feb 13 15:57:21.831333 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 15:57:21.831415 kernel: hub 1-0:1.0: USB hub found
Feb 13 15:57:21.831533 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 15:57:21.831687 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 15:57:21.831853 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Feb 13 15:57:21.831960 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:57:21.831973 kernel: hub 2-0:1.0: USB hub found
Feb 13 15:57:21.832091 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 15:57:21.832210 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:57:21.833562 kernel: sd 0:0:0:1: Power-on or device reset occurred
Feb 13 15:57:21.842399 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Feb 13 15:57:21.842976 kernel: sd 0:0:0:1: [sda] Write Protect is off
Feb 13 15:57:21.843135 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Feb 13 15:57:21.843272 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:57:21.843373 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:57:21.843386 kernel: GPT:17805311 != 80003071
Feb 13 15:57:21.843396 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:57:21.843407 kernel: GPT:17805311 != 80003071
Feb 13 15:57:21.843417 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:57:21.843427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:57:21.843439 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Feb 13 15:57:21.834185 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
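[Editor's note: the repeated "GPT:17805311 != 80003071" warnings above mean the primary GPT header, written when the image was built for a smaller disk, still claims the backup header sits at LBA 17805311 rather than at the real last sector (80003071) of this 41.0 GB drive; disk-uuid.service rewrites the table a few entries below. A sketch of the same consistency check, reading the primary header at LBA 1 and comparing its alternate-LBA field (offset 32 in the UEFI GPT header layout) against the actual end of the device; the script is illustrative, not the kernel's implementation.]

import struct

SECTOR = 512

def gpt_backup_mismatch(path: str) -> tuple:
    """Return (alt_lba_from_header, actual_last_lba) for a raw disk or image."""
    with open(path, "rb") as disk:
        disk.seek(0, 2)
        last_lba = disk.tell() // SECTOR - 1   # real last sector of the device
        disk.seek(1 * SECTOR)                  # primary GPT header lives at LBA 1
        header = disk.read(92)
    assert header[:8] == b"EFI PART", "no GPT signature"
    # Offset 32: AlternateLBA, where the header claims the backup copy lives.
    (alt_lba,) = struct.unpack_from("<Q", header, 32)
    return alt_lba, last_lba

# On the disk booted above this would return (17805311, 80003071),
# matching the kernel's "GPT:17805311 != 80003071" warning.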
Feb 13 15:57:21.879185 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (513) Feb 13 15:57:21.882184 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (510) Feb 13 15:57:21.889426 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:57:21.894094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:57:21.898927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:57:21.906027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:57:21.906706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:57:21.913353 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:57:21.919071 disk-uuid[570]: Primary Header is updated. Feb 13 15:57:21.919071 disk-uuid[570]: Secondary Entries is updated. Feb 13 15:57:21.919071 disk-uuid[570]: Secondary Header is updated. Feb 13 15:57:21.934180 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:57:22.056334 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:57:22.299259 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:57:22.433776 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:57:22.433834 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:57:22.435178 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:57:22.489316 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:57:22.489706 kernel: usbcore: registered new interface driver usbhid Feb 13 15:57:22.489733 kernel: usbhid: USB HID core driver Feb 13 15:57:22.943177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:57:22.944291 disk-uuid[572]: The operation has completed successfully. Feb 13 15:57:23.003846 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:57:23.003951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:57:23.019406 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:57:23.036388 sh[583]: Success Feb 13 15:57:23.049174 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:57:23.109928 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:57:23.120562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:57:23.124258 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:57:23.147318 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:57:23.147384 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:57:23.147407 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:57:23.148214 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:57:23.148249 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:57:23.156176 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:57:23.158440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:57:23.161929 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:57:23.169638 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:57:23.174379 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:57:23.185437 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:57:23.185488 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:57:23.185507 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:57:23.190922 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:57:23.191006 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:57:23.204228 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:57:23.204615 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:57:23.212727 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:57:23.222348 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:57:23.301121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:57:23.307418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:57:23.339331 ignition[677]: Ignition 2.20.0 Feb 13 15:57:23.339349 ignition[677]: Stage: fetch-offline Feb 13 15:57:23.340385 ignition[677]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:23.340401 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:23.340590 ignition[677]: parsed url from cmdline: "" Feb 13 15:57:23.340594 ignition[677]: no config URL provided Feb 13 15:57:23.340599 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:57:23.340606 ignition[677]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:57:23.340611 ignition[677]: failed to fetch config: resource requires networking Feb 13 15:57:23.342431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:57:23.340798 ignition[677]: Ignition finished successfully Feb 13 15:57:23.345212 systemd-networkd[770]: lo: Link UP Feb 13 15:57:23.345216 systemd-networkd[770]: lo: Gained carrier Feb 13 15:57:23.347284 systemd-networkd[770]: Enumeration completed Feb 13 15:57:23.347604 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:57:23.347966 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:57:23.347970 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:57:23.349368 systemd[1]: Reached target network.target - Network. Feb 13 15:57:23.349399 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:23.349403 systemd-networkd[770]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:57:23.350269 systemd-networkd[770]: eth0: Link UP Feb 13 15:57:23.350273 systemd-networkd[770]: eth0: Gained carrier Feb 13 15:57:23.350280 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:23.356466 systemd-networkd[770]: eth1: Link UP Feb 13 15:57:23.356470 systemd-networkd[770]: eth1: Gained carrier Feb 13 15:57:23.356480 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:23.357412 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:57:23.374817 ignition[774]: Ignition 2.20.0 Feb 13 15:57:23.374860 ignition[774]: Stage: fetch Feb 13 15:57:23.375351 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:23.375373 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:23.375674 ignition[774]: parsed url from cmdline: "" Feb 13 15:57:23.375683 ignition[774]: no config URL provided Feb 13 15:57:23.375697 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:57:23.375714 ignition[774]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:57:23.375835 ignition[774]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 15:57:23.377316 ignition[774]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 15:57:23.391253 systemd-networkd[770]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:57:23.418251 systemd-networkd[770]: eth0: DHCPv4 address 157.90.248.142/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:57:23.577510 ignition[774]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 15:57:23.583272 ignition[774]: GET result: OK Feb 13 15:57:23.583461 ignition[774]: parsing config with SHA512: 7dbc5cdd58658fb614f50e189f1ca6ce44b30125f2dd2d3c470dff7bf1aaf26218f699fe5a76acda05c89b74cc51f94abfe6c4856ca7097f688e074da9f0fc6e Feb 13 15:57:23.589212 unknown[774]: fetched base config from "system" Feb 13 15:57:23.590703 ignition[774]: fetch: fetch complete Feb 13 15:57:23.589228 unknown[774]: fetched base config from "system" Feb 13 15:57:23.590711 ignition[774]: fetch: fetch passed Feb 13 15:57:23.589234 unknown[774]: fetched user config from "hetzner" Feb 13 15:57:23.590786 ignition[774]: Ignition finished successfully Feb 13 15:57:23.592830 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:57:23.599332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:57:23.613380 ignition[781]: Ignition 2.20.0 Feb 13 15:57:23.614234 ignition[781]: Stage: kargs Feb 13 15:57:23.614493 ignition[781]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:23.614509 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:23.618728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 15:57:23.615645 ignition[781]: kargs: kargs passed Feb 13 15:57:23.615700 ignition[781]: Ignition finished successfully Feb 13 15:57:23.629280 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:57:23.642828 ignition[788]: Ignition 2.20.0 Feb 13 15:57:23.642837 ignition[788]: Stage: disks Feb 13 15:57:23.645861 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:57:23.643022 ignition[788]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:23.643032 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:23.647985 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:57:23.644243 ignition[788]: disks: disks passed Feb 13 15:57:23.649365 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:57:23.644298 ignition[788]: Ignition finished successfully Feb 13 15:57:23.651094 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:57:23.652112 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:57:23.652896 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:57:23.658395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:57:23.676368 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 15:57:23.682214 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:57:23.688781 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:57:23.730184 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:57:23.730652 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:57:23.731615 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:57:23.736252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:57:23.739312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:57:23.743302 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:57:23.743914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:57:23.743942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:57:23.750683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:57:23.760779 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (804) Feb 13 15:57:23.759001 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:57:23.765977 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:57:23.766007 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:57:23.766020 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:57:23.772325 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:57:23.772402 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:57:23.778352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:57:23.817016 coreos-metadata[806]: Feb 13 15:57:23.816 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 15:57:23.820321 coreos-metadata[806]: Feb 13 15:57:23.818 INFO Fetch successful Feb 13 15:57:23.820321 coreos-metadata[806]: Feb 13 15:57:23.818 INFO wrote hostname ci-4152-2-1-f-29672fd7f0 to /sysroot/etc/hostname Feb 13 15:57:23.822497 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:57:23.820999 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:57:23.827482 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:57:23.832590 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:57:23.837001 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:57:23.947209 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:57:23.953338 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:57:23.955501 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:57:23.964192 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:57:23.986900 ignition[921]: INFO : Ignition 2.20.0 Feb 13 15:57:23.988186 ignition[921]: INFO : Stage: mount Feb 13 15:57:23.988186 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:23.988186 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:23.991649 ignition[921]: INFO : mount: mount passed Feb 13 15:57:23.991649 ignition[921]: INFO : Ignition finished successfully Feb 13 15:57:23.990285 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:57:23.992449 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:57:24.004469 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:57:24.148605 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:57:24.154346 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:57:24.165196 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (932) Feb 13 15:57:24.167404 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:57:24.167449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:57:24.167474 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:57:24.170345 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:57:24.170405 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:57:24.173625 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:57:24.193055 ignition[949]: INFO : Ignition 2.20.0 Feb 13 15:57:24.193750 ignition[949]: INFO : Stage: files Feb 13 15:57:24.195266 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:24.195266 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:24.196477 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:57:24.197707 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:57:24.197707 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:57:24.201256 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:57:24.202792 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:57:24.202792 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:57:24.201737 unknown[949]: wrote ssh authorized keys file for user: core Feb 13 15:57:24.204982 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 15:57:24.204982 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 15:57:24.204982 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:57:24.204982 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:57:24.293095 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:57:24.420250 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:57:24.420250 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:57:24.422729 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 15:57:24.893715 systemd-networkd[770]: eth0: Gained IPv6LL Feb 13 15:57:24.957347 systemd-networkd[770]: eth1: Gained IPv6LL Feb 13 15:57:25.002085 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 15:57:25.113484 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:57:25.113484 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:57:25.115915 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:57:25.634396 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Feb 13 15:57:25.940806 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:57:25.940806 ignition[949]: INFO : files: op(d): [started] processing unit "containerd.service" Feb 13 15:57:25.944797 ignition[949]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 15:57:25.944797 ignition[949]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 15:57:25.944797 ignition[949]: INFO : files: op(d): [finished] processing unit "containerd.service" Feb 13 15:57:25.944797 ignition[949]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 
15:57:25.954250 ignition[949]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:57:25.954250 ignition[949]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:57:25.954250 ignition[949]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:57:25.954250 ignition[949]: INFO : files: files passed Feb 13 15:57:25.954250 ignition[949]: INFO : Ignition finished successfully Feb 13 15:57:25.950600 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:57:25.959338 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:57:25.963791 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:57:25.967793 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:57:25.967880 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:57:25.982276 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:57:25.982276 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:57:25.984799 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:57:25.987599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:57:25.988402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:57:25.996399 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:57:26.040020 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:57:26.040248 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:57:26.043011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:57:26.045228 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:57:26.047114 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:57:26.052388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:57:26.070058 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:57:26.080354 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:57:26.090632 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:57:26.091642 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:57:26.092975 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:57:26.094276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:57:26.094395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:57:26.096199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:57:26.096818 systemd[1]: Stopped target basic.target - Basic System. 
Feb 13 15:57:26.098177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:57:26.099461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:57:26.100986 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:57:26.102639 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:57:26.104016 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:57:26.105465 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:57:26.106443 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:57:26.107459 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:57:26.108293 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:57:26.108476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:57:26.109588 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:57:26.110192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:57:26.111141 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:57:26.111590 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:57:26.112227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:57:26.112337 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:57:26.113834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:57:26.113954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:57:26.116037 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:57:26.116256 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:57:26.117674 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:57:26.117866 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:57:26.127464 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:57:26.130730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:57:26.131209 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:57:26.131334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:57:26.136883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:57:26.137012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:57:26.143426 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:57:26.143588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:57:26.152932 ignition[1002]: INFO : Ignition 2.20.0 Feb 13 15:57:26.152932 ignition[1002]: INFO : Stage: umount Feb 13 15:57:26.152932 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:57:26.152932 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:57:26.152932 ignition[1002]: INFO : umount: umount passed Feb 13 15:57:26.152932 ignition[1002]: INFO : Ignition finished successfully Feb 13 15:57:26.156770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:57:26.157975 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 13 15:57:26.158736 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:57:26.160485 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:57:26.160615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:57:26.163967 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:57:26.164021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:57:26.165275 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:57:26.165311 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:57:26.169416 systemd[1]: Stopped target network.target - Network. Feb 13 15:57:26.170700 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:57:26.170785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:57:26.174121 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:57:26.175448 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:57:26.183287 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:57:26.189035 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:57:26.189751 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:57:26.191431 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:57:26.191509 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:57:26.192930 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:57:26.192985 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:57:26.195084 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:57:26.195173 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:57:26.201041 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:57:26.201104 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:57:26.202416 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:57:26.205529 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:57:26.213327 systemd-networkd[770]: eth1: DHCPv6 lease lost Feb 13 15:57:26.215047 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:57:26.215332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:57:26.216949 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:57:26.217031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:57:26.218399 systemd-networkd[770]: eth0: DHCPv6 lease lost Feb 13 15:57:26.222088 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:57:26.222730 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:57:26.225429 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:57:26.225628 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:57:26.227913 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:57:26.228018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:57:26.233362 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:57:26.233954 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:57:26.234022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 15:57:26.235046 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:57:26.235083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:57:26.237334 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:57:26.237382 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:57:26.238939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:57:26.238977 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:57:26.240825 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:57:26.256379 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:57:26.256847 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:57:26.259583 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:57:26.260299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:57:26.261503 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:57:26.261576 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:57:26.262306 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:57:26.262339 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:57:26.264344 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:57:26.264412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:57:26.265719 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:57:26.265774 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:57:26.267355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:57:26.267422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:57:26.275402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:57:26.275979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:57:26.276039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:57:26.277970 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:57:26.278010 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:57:26.279946 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:57:26.279989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:57:26.280644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:57:26.280681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:57:26.287582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:57:26.287690 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:57:26.288744 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:57:26.299554 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:57:26.312605 systemd[1]: Switching root. Feb 13 15:57:26.338456 systemd-journald[237]: Journal stopped Feb 13 15:57:27.254962 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:57:27.255031 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:57:27.255044 kernel: SELinux: policy capability open_perms=1 Feb 13 15:57:27.255060 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:57:27.255075 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:57:27.255086 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:57:27.255095 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:57:27.255105 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:57:27.255121 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:57:27.255132 kernel: audit: type=1403 audit(1739462246.515:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:57:27.255144 systemd[1]: Successfully loaded SELinux policy in 35.491ms. Feb 13 15:57:27.255178 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.068ms. Feb 13 15:57:27.255193 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:57:27.255206 systemd[1]: Detected virtualization kvm. Feb 13 15:57:27.255218 systemd[1]: Detected architecture arm64. Feb 13 15:57:27.255228 systemd[1]: Detected first boot. Feb 13 15:57:27.255238 systemd[1]: Hostname set to <ci-4152-2-1-f-29672fd7f0>. Feb 13 15:57:27.257256 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:57:27.257276 zram_generator::config[1067]: No configuration found. Feb 13 15:57:27.257268 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:57:27.257278 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:57:27.257289 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:57:27.257303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:57:27.257316 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:57:27.257328 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:57:27.257339 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:57:27.257351 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:57:27.257362 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:57:27.257377 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:57:27.257387 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:57:27.257397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:57:27.257408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:57:27.257420 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:57:27.257434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:57:27.257447 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:57:27.257460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:57:27.257470 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:57:27.257517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:57:27.257534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:57:27.257545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:57:27.257555 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:57:27.257568 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:57:27.257581 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:57:27.257594 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:57:27.257605 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:57:27.257616 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:57:27.257627 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:57:27.257637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:57:27.257647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:57:27.257658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:57:27.257670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:57:27.257682 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:57:27.257694 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:57:27.257708 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:57:27.257721 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:57:27.257733 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:57:27.257745 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:57:27.257755 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:57:27.257766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:57:27.257777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:57:27.257787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:57:27.257798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:57:27.257808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:57:27.257819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:57:27.257829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:57:27.257840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:57:27.257854 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:57:27.257866 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 15:57:27.257878 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 15:57:27.257888 systemd[1]: Starting systemd-journald.service - Journal Service... 
Feb 13 15:57:27.257899 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:57:27.257909 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:57:27.257940 kernel: fuse: init (API version 7.39) Feb 13 15:57:27.257988 systemd-journald[1155]: Collecting audit messages is disabled. Feb 13 15:57:27.258016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:57:27.258030 systemd-journald[1155]: Journal started Feb 13 15:57:27.258054 systemd-journald[1155]: Runtime Journal (/run/log/journal/5af9178474934a83beae637592e89a73) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:57:27.266201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:57:27.271564 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:57:27.275379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:57:27.277380 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:57:27.278121 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:57:27.278862 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:57:27.282290 kernel: loop: module loaded Feb 13 15:57:27.280957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:57:27.283383 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:57:27.284818 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:57:27.287125 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:57:27.287391 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:57:27.288669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:57:27.288829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:57:27.290626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:57:27.290785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:57:27.297432 kernel: ACPI: bus type drm_connector registered Feb 13 15:57:27.292771 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:57:27.292945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:57:27.293896 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:57:27.294041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:57:27.297871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:57:27.298894 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:57:27.299103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:57:27.300564 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:57:27.305009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:57:27.315786 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:57:27.323293 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:57:27.337470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 15:57:27.343404 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:57:27.351441 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:57:27.357712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:57:27.358667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:57:27.367312 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:57:27.368032 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:57:27.373977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:57:27.390370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:57:27.395407 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:57:27.398977 systemd-journald[1155]: Time spent on flushing to /var/log/journal/5af9178474934a83beae637592e89a73 is 29.081ms for 1118 entries. Feb 13 15:57:27.398977 systemd-journald[1155]: System Journal (/var/log/journal/5af9178474934a83beae637592e89a73) is 8.0M, max 584.8M, 576.8M free. Feb 13 15:57:27.449365 systemd-journald[1155]: Received client request to flush runtime journal. Feb 13 15:57:27.400682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:57:27.403265 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:57:27.404667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:57:27.413651 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:57:27.417794 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:57:27.432418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:57:27.434688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:57:27.454281 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Feb 13 15:57:27.454298 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Feb 13 15:57:27.462542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:57:27.464718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:57:27.473546 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:57:27.476541 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:57:27.500667 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:57:27.510498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:57:27.524435 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Feb 13 15:57:27.524449 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Feb 13 15:57:27.531681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:57:27.958923 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 15:57:27.968443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:57:27.989327 systemd-udevd[1229]: Using default interface naming scheme 'v255'. Feb 13 15:57:28.009617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:57:28.025215 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:57:28.039427 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:57:28.063356 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 15:57:28.115894 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:57:28.206052 systemd-networkd[1236]: lo: Link UP Feb 13 15:57:28.207097 systemd-networkd[1236]: lo: Gained carrier Feb 13 15:57:28.208779 systemd-networkd[1236]: Enumeration completed Feb 13 15:57:28.209291 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:28.209302 systemd-networkd[1236]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:57:28.210020 systemd-networkd[1236]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:28.210032 systemd-networkd[1236]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:57:28.210664 systemd-networkd[1236]: eth0: Link UP Feb 13 15:57:28.210676 systemd-networkd[1236]: eth0: Gained carrier Feb 13 15:57:28.210690 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:28.211914 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:57:28.214618 systemd-networkd[1236]: eth1: Link UP Feb 13 15:57:28.214631 systemd-networkd[1236]: eth1: Gained carrier Feb 13 15:57:28.214650 systemd-networkd[1236]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:57:28.220394 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:57:28.229217 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:57:28.238126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:57:28.240241 systemd-networkd[1236]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:57:28.243052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:57:28.253314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:57:28.258338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:57:28.259859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:57:28.259903 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:57:28.269386 systemd-networkd[1236]: eth0: DHCPv4 address 157.90.248.142/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:57:28.273045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:57:28.275368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:57:28.279212 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1243) Feb 13 15:57:28.280622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:57:28.280796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:57:28.283850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:57:28.290490 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:57:28.290874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:57:28.292980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:57:28.354108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:57:28.357444 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:57:28.357548 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:57:28.357564 kernel: [drm] features: -context_init Feb 13 15:57:28.358259 kernel: [drm] number of scanouts: 1 Feb 13 15:57:28.360450 kernel: [drm] number of cap sets: 0 Feb 13 15:57:28.360550 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:57:28.367460 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:57:28.368441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:57:28.370675 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:57:28.377330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:57:28.377619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:57:28.382364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:57:28.464693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:57:28.521764 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:57:28.529496 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:57:28.543312 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:57:28.573384 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:57:28.575179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:57:28.581409 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:57:28.601181 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:57:28.632797 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:57:28.635100 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:57:28.636969 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:57:28.637506 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:57:28.638135 systemd[1]: Reached target machines.target - Containers. 
Feb 13 15:57:28.639948 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:57:28.646386 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:57:28.649350 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:57:28.652399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:57:28.653636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:57:28.658428 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:57:28.662881 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:57:28.667127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:57:28.681415 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:57:28.693229 kernel: loop0: detected capacity change from 0 to 8 Feb 13 15:57:28.700875 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:57:28.707668 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:57:28.709691 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:57:28.715194 kernel: loop1: detected capacity change from 0 to 116808 Feb 13 15:57:28.749194 kernel: loop2: detected capacity change from 0 to 194512 Feb 13 15:57:28.778193 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 15:57:28.817297 kernel: loop4: detected capacity change from 0 to 8 Feb 13 15:57:28.820255 kernel: loop5: detected capacity change from 0 to 116808 Feb 13 15:57:28.830180 kernel: loop6: detected capacity change from 0 to 194512 Feb 13 15:57:28.851291 kernel: loop7: detected capacity change from 0 to 113536 Feb 13 15:57:28.861495 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:57:28.862065 (sd-merge)[1323]: Merged extensions into '/usr'. Feb 13 15:57:28.870585 systemd[1]: Reloading requested from client PID 1310 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:57:28.870607 systemd[1]: Reloading... Feb 13 15:57:28.962521 zram_generator::config[1354]: No configuration found. Feb 13 15:57:29.084199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:57:29.093187 ldconfig[1306]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:57:29.146929 systemd[1]: Reloading finished in 275 ms. Feb 13 15:57:29.166102 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:57:29.167141 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:57:29.174507 systemd[1]: Starting ensure-sysext.service... Feb 13 15:57:29.178363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:57:29.193371 systemd[1]: Reloading requested from client PID 1396 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:57:29.193408 systemd[1]: Reloading... 
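The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-hetzner extension images onto /usr, followed by the daemon reloads logged above. The merge state can be inspected and redone from a shell with the standard systemd-sysext verbs:

    # Show which extension images are currently merged and where they came from.
    systemd-sysext status
    # Re-evaluate the extension directories after adding or removing an image.
    systemd-sysext refresh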
Feb 13 15:57:29.223019 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:57:29.223935 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:57:29.224918 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:57:29.225376 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Feb 13 15:57:29.225566 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Feb 13 15:57:29.229123 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:57:29.229293 systemd-tmpfiles[1397]: Skipping /boot Feb 13 15:57:29.237956 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:57:29.238292 systemd-tmpfiles[1397]: Skipping /boot Feb 13 15:57:29.273189 zram_generator::config[1428]: No configuration found. Feb 13 15:57:29.378759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:57:29.440681 systemd[1]: Reloading finished in 246 ms. Feb 13 15:57:29.459341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:57:29.479445 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:57:29.498521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:57:29.503259 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:57:29.509775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:57:29.520332 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:57:29.535984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:57:29.544542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:57:29.547325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:57:29.554756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:57:29.556414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:57:29.560596 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:57:29.565694 systemd-networkd[1236]: eth0: Gained IPv6LL Feb 13 15:57:29.579857 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:57:29.582575 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:57:29.583684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:57:29.583849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:57:29.593642 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:57:29.594014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:57:29.596336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:57:29.597782 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
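The "Duplicate line" warnings mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first and ignores the rest, so they are cosmetic. The merged configuration can be dumped with source annotations to locate the overlapping fragments:

    # Print the effective tmpfiles.d config and find the duplicated paths.
    systemd-tmpfiles --cat-config | grep -n -e '/root' -e '/var/log/journal'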
Feb 13 15:57:29.611268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:57:29.618444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:57:29.622212 augenrules[1512]: No rules Feb 13 15:57:29.629934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:57:29.636328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:57:29.642843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:57:29.645080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:57:29.645398 systemd-resolved[1473]: Positive Trust Anchors: Feb 13 15:57:29.645498 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:57:29.645531 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:57:29.652578 systemd-resolved[1473]: Using system hostname 'ci-4152-2-1-f-29672fd7f0'. Feb 13 15:57:29.657107 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:57:29.662320 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:57:29.664486 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:57:29.664713 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:57:29.666771 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:57:29.672222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:57:29.672397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:57:29.673542 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:57:29.673697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:57:29.674839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:57:29.674998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:57:29.676179 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:57:29.678433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:57:29.680122 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:57:29.683552 systemd[1]: Finished ensure-sysext.service. Feb 13 15:57:29.691143 systemd[1]: Reached target network.target - Network. Feb 13 15:57:29.691698 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:57:29.692270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:57:29.692868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
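The positive trust anchor logged by systemd-resolved (". IN DS 20326 8 2 ...") is the built-in DNSSEC root key, and the negative anchors are the private and special-use zones excluded from validation. Once the service is up, its effective state can be checked with resolvectl:

    # Global and per-link resolver state derived from the anchors above.
    resolvectl status
    # Example lookup through the local stub (domain is illustrative).
    resolvectl query flatcar.org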
Feb 13 15:57:29.692944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:57:29.693615 systemd-networkd[1236]: eth1: Gained IPv6LL Feb 13 15:57:29.698352 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:57:29.699190 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:57:29.757811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:57:29.759957 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:57:29.761520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:57:29.763229 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:57:29.764727 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:57:29.766419 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:57:29.766540 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:57:29.767755 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:57:29.768554 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:57:29.769209 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:57:29.769847 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:57:29.770902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:57:29.773099 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:57:29.774956 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:57:29.779583 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:57:29.780331 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:57:29.780952 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:57:29.782009 systemd[1]: System is tainted: cgroupsv1 Feb 13 15:57:29.782081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:57:29.782120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:57:29.784332 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:57:29.788321 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:57:29.795345 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:57:29.799418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:57:29.805369 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:57:29.806169 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:57:29.816683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:57:29.821552 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
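The "System is tainted: cgroupsv1" entry records that this image still boots with the legacy cgroup v1 hierarchy, which is also why the cgroup-compat translation message appears further down. Moving to the unified v2 hierarchy is a kernel command-line switch; shown only as an illustration, since flipping it affects kubelet and Docker on a node like this:

    # Kernel command-line parameter selecting the unified (v2) cgroup hierarchy.
    systemd.unified_cgroup_hierarchy=1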
Feb 13 15:57:29.831203 jq[1546]: false Feb 13 15:57:29.830311 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:57:29.836377 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:57:29.841629 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:57:29.851444 dbus-daemon[1544]: [system] SELinux support is enabled Feb 13 15:57:29.856657 coreos-metadata[1543]: Feb 13 15:57:29.855 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:57:29.853826 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:57:29.856950 coreos-metadata[1543]: Feb 13 15:57:29.856 INFO Fetch successful Feb 13 15:57:29.865285 coreos-metadata[1543]: Feb 13 15:57:29.859 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:57:29.865285 coreos-metadata[1543]: Feb 13 15:57:29.860 INFO Fetch successful Feb 13 15:57:29.875838 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:57:29.880763 extend-filesystems[1547]: Found loop4 Feb 13 15:57:29.880763 extend-filesystems[1547]: Found loop5 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found loop6 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found loop7 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda1 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda2 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda3 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found usr Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda4 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda6 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda7 Feb 13 15:57:29.898986 extend-filesystems[1547]: Found sda9 Feb 13 15:57:29.898986 extend-filesystems[1547]: Checking size of /dev/sda9 Feb 13 15:57:29.887357 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:57:29.892033 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:57:29.917999 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:57:29.932730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:57:29.938522 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:57:29.941182 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1248) Feb 13 15:57:29.947395 extend-filesystems[1547]: Resized partition /dev/sda9 Feb 13 15:57:29.952572 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:57:29.952817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:57:29.956790 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:57:29.961352 jq[1583]: true Feb 13 15:57:29.963517 extend-filesystems[1590]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:57:29.957031 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:57:29.970563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:57:29.976575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:57:29.976821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
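coreos-metadata is simply polling Hetzner's link-local metadata service. The same documents it fetched can be retrieved by hand when debugging:

    # Endpoints taken verbatim from the log above.
    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks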
Feb 13 15:57:29.989090 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:57:30.011165 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:57:30.031952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:57:30.031979 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:57:30.035383 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:57:30.035402 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:57:30.041050 jq[1596]: true Feb 13 15:57:30.044088 update_engine[1577]: I20250213 15:57:30.038017 1577 main.cc:92] Flatcar Update Engine starting Feb 13 15:57:30.060581 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:57:30.062847 tar[1594]: linux-arm64/helm Feb 13 15:57:30.068271 update_engine[1577]: I20250213 15:57:30.064409 1577 update_check_scheduler.cc:74] Next update check in 11m7s Feb 13 15:57:30.069598 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:57:30.071138 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:57:30.151526 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:57:30.132404 systemd-timesyncd[1538]: Contacted time server 85.214.83.151:123 (0.flatcar.pool.ntp.org). Feb 13 15:57:30.132479 systemd-timesyncd[1538]: Initial clock synchronization to Thu 2025-02-13 15:57:30.247640 UTC. Feb 13 15:57:30.148061 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:57:30.150898 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:57:30.159793 extend-filesystems[1590]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:57:30.159793 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:57:30.159793 extend-filesystems[1590]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:57:30.158640 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:57:30.168382 extend-filesystems[1547]: Resized filesystem in /dev/sda9 Feb 13 15:57:30.168382 extend-filesystems[1547]: Found sr0 Feb 13 15:57:30.158898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:57:30.166103 systemd-logind[1567]: New seat seat0. Feb 13 15:57:30.176561 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:57:30.176578 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:57:30.176855 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:57:30.211182 bash[1645]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:57:30.216011 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:57:30.235680 systemd[1]: Starting sshkeys.service... 
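extend-filesystems grew the root filesystem online from 1617920 to 9393147 4k blocks (roughly 6 GiB to 36 GiB) to fill the enlarged /dev/sda9. A manual sketch of the same grow, assuming cloud-utils' growpart is available:

    # Grow partition 9 to the end of the disk, then resize the mounted ext4
    # filesystem in place (device names taken from this log).
    growpart /dev/sda 9
    resize2fs /dev/sda9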
Feb 13 15:57:30.254204 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:57:30.264101 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:57:30.315487 coreos-metadata[1649]: Feb 13 15:57:30.315 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Feb 13 15:57:30.320786 coreos-metadata[1649]: Feb 13 15:57:30.318 INFO Fetch successful
Feb 13 15:57:30.324055 unknown[1649]: wrote ssh authorized keys file for user: core
Feb 13 15:57:30.362438 update-ssh-keys[1657]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:57:30.367664 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:57:30.377081 systemd[1]: Finished sshkeys.service.
Feb 13 15:57:30.396535 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:57:30.432633 containerd[1597]: time="2025-02-13T15:57:30.431650280Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:57:30.497308 containerd[1597]: time="2025-02-13T15:57:30.497249240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.500859 containerd[1597]: time="2025-02-13T15:57:30.500794960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:57:30.500859 containerd[1597]: time="2025-02-13T15:57:30.500844040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:57:30.500859 containerd[1597]: time="2025-02-13T15:57:30.500862880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501023680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501050560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501109080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501121320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501357680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501372960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501385240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501393640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501578 containerd[1597]: time="2025-02-13T15:57:30.501482000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501796 containerd[1597]: time="2025-02-13T15:57:30.501675520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501817 containerd[1597]: time="2025-02-13T15:57:30.501794520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:57:30.501817 containerd[1597]: time="2025-02-13T15:57:30.501808760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:57:30.502522 containerd[1597]: time="2025-02-13T15:57:30.501874400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:57:30.502522 containerd[1597]: time="2025-02-13T15:57:30.501924360Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509672800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509729040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509744400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509761560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509778360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:57:30.510165 containerd[1597]: time="2025-02-13T15:57:30.509936760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:57:30.510360 containerd[1597]: time="2025-02-13T15:57:30.510283840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510383240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510404480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510420320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510433440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510458680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510472480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510485760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510501240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510513920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510525960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510539240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510565320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510579840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.510629 containerd[1597]: time="2025-02-13T15:57:30.510591200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510603960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510615880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510631000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510643600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510656240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510673240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510689800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510704880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510716440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510728560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510743440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510769840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510784040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.511602 containerd[1597]: time="2025-02-13T15:57:30.510794040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.510953480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.510970600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.510984080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.510995720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.511004600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.511017080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.511027360Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:57:30.512848 containerd[1597]: time="2025-02-13T15:57:30.511037720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:57:30.513540 containerd[1597]: time="2025-02-13T15:57:30.513398960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:57:30.513540 containerd[1597]: time="2025-02-13T15:57:30.513479080Z" level=info msg="Connect containerd service"
Feb 13 15:57:30.513540 containerd[1597]: time="2025-02-13T15:57:30.513526120Z" level=info msg="using legacy CRI server"
Feb 13 15:57:30.513540 containerd[1597]: time="2025-02-13T15:57:30.513534200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:57:30.514162 containerd[1597]: time="2025-02-13T15:57:30.513766000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:57:30.514503 containerd[1597]: time="2025-02-13T15:57:30.514474200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515428920Z" level=info msg="Start subscribing containerd event"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515515160Z" level=info msg="Start recovering state"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515576960Z" level=info msg="Start event monitor"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515587760Z" level=info msg="Start snapshots syncer"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515596760Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:57:30.515682 containerd[1597]: time="2025-02-13T15:57:30.515605320Z" level=info msg="Start streaming server"
Feb 13 15:57:30.518341 containerd[1597]: time="2025-02-13T15:57:30.518316840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:57:30.518414 containerd[1597]: time="2025-02-13T15:57:30.518372400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:57:30.518586 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:57:30.519380 containerd[1597]: time="2025-02-13T15:57:30.519298600Z" level=info msg="containerd successfully booted in 0.092246s"
Feb 13 15:57:30.880240 tar[1594]: linux-arm64/LICENSE
Feb 13 15:57:30.880240 tar[1594]: linux-arm64/README.md
Feb 13 15:57:30.894817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:57:31.034342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:57:31.044867 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:57:31.136819 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:57:31.175069 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:57:31.183511 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:57:31.195104 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:57:31.195457 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:57:31.203582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:57:31.217691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:57:31.226517 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:57:31.229822 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:57:31.231220 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:57:31.234046 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:57:31.236592 systemd[1]: Startup finished in 6.610s (kernel) + 4.756s (userspace) = 11.367s.
Feb 13 15:57:31.616786 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:57:31.623729 systemd[1]: Started sshd@0-157.90.248.142:22-118.193.38.84:48544.service - OpenSSH per-connection server daemon (118.193.38.84:48544).
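The single level=error line during CRI startup ("no network config found in /etc/cni/net.d") is expected on a fresh node: the CNI conf syncer has just started and will pick the network up once a config is installed. For reference, the file it is looking for is a conflist of roughly this shape, here as a purely illustrative /etc/cni/net.d/10-mynet.conflist with made-up values:

    {
      "cniVersion": "0.3.1",
      "name": "mynet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }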
Feb 13 15:57:31.678770 kubelet[1683]: E0213 15:57:31.678617 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:57:31.683199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:57:31.683614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:57:33.106829 sshd[1714]: Invalid user neo4j from 118.193.38.84 port 48544 Feb 13 15:57:33.394358 sshd[1714]: Received disconnect from 118.193.38.84 port 48544:11: Bye Bye [preauth] Feb 13 15:57:33.394358 sshd[1714]: Disconnected from invalid user neo4j 118.193.38.84 port 48544 [preauth] Feb 13 15:57:33.395380 systemd[1]: sshd@0-157.90.248.142:22-118.193.38.84:48544.service: Deactivated successfully. Feb 13 15:57:41.933910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:57:41.942444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:57:42.060351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:57:42.069948 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:57:42.133879 kubelet[1733]: E0213 15:57:42.133681 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:57:42.137475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:57:42.137637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:57:44.870680 systemd[1]: Started sshd@1-157.90.248.142:22-125.94.71.207:55540.service - OpenSSH per-connection server daemon (125.94.71.207:55540). Feb 13 15:57:45.583620 systemd[1]: Started sshd@2-157.90.248.142:22-186.124.22.55:45314.service - OpenSSH per-connection server daemon (186.124.22.55:45314). Feb 13 15:57:46.245595 sshd[1743]: Invalid user runner from 125.94.71.207 port 55540 Feb 13 15:57:46.507086 sshd[1743]: Received disconnect from 125.94.71.207 port 55540:11: Bye Bye [preauth] Feb 13 15:57:46.507086 sshd[1743]: Disconnected from invalid user runner 125.94.71.207 port 55540 [preauth] Feb 13 15:57:46.509344 systemd[1]: sshd@1-157.90.248.142:22-125.94.71.207:55540.service: Deactivated successfully. Feb 13 15:57:46.947528 sshd[1745]: Invalid user gitlab-runner from 186.124.22.55 port 45314 Feb 13 15:57:47.198951 sshd[1745]: Received disconnect from 186.124.22.55 port 45314:11: Bye Bye [preauth] Feb 13 15:57:47.198951 sshd[1745]: Disconnected from invalid user gitlab-runner 186.124.22.55 port 45314 [preauth] Feb 13 15:57:47.202922 systemd[1]: sshd@2-157.90.248.142:22-186.124.22.55:45314.service: Deactivated successfully. Feb 13 15:57:50.255638 systemd[1]: Started sshd@3-157.90.248.142:22-27.111.32.174:59816.service - OpenSSH per-connection server daemon (27.111.32.174:59816). 
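The kubelet exit above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm generates it, so the unit keeps crash-looping (the "Scheduled restart job" lines that follow) until the node is initialized or joined. Illustrative only, with the placeholders left unfilled:

    # kubeadm writes /var/lib/kubelet/config.yaml during init/join,
    # after which kubelet's restart loop converges.
    kubeadm join <control-plane-endpoint> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>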
Feb 13 15:57:51.240464 sshd[1754]: Invalid user minecraft from 27.111.32.174 port 59816 Feb 13 15:57:51.418292 sshd[1754]: Received disconnect from 27.111.32.174 port 59816:11: Bye Bye [preauth] Feb 13 15:57:51.418292 sshd[1754]: Disconnected from invalid user minecraft 27.111.32.174 port 59816 [preauth] Feb 13 15:57:51.421636 systemd[1]: sshd@3-157.90.248.142:22-27.111.32.174:59816.service: Deactivated successfully. Feb 13 15:57:52.388744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:57:52.397489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:57:52.512388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:57:52.517322 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:57:52.570423 kubelet[1772]: E0213 15:57:52.570355 1772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:57:52.573193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:57:52.573416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:58:02.824366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:58:02.832908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:02.941781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:02.952148 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:03.015220 kubelet[1793]: E0213 15:58:03.015143 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:03.017708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:03.017849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:58:13.047034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:58:13.054567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:13.163354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:13.167382 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:13.216753 kubelet[1815]: E0213 15:58:13.216654 1815 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:13.220014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:13.220326 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:58:14.915106 update_engine[1577]: I20250213 15:58:14.914144 1577 update_attempter.cc:509] Updating boot flags... Feb 13 15:58:14.967217 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1833) Feb 13 15:58:20.666538 systemd[1]: Started sshd@4-157.90.248.142:22-139.178.89.65:60150.service - OpenSSH per-connection server daemon (139.178.89.65:60150). Feb 13 15:58:21.665210 sshd[1839]: Accepted publickey for core from 139.178.89.65 port 60150 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:21.669740 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:21.680889 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:58:21.691923 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:58:21.695766 systemd-logind[1567]: New session 1 of user core. Feb 13 15:58:21.707501 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:58:21.713649 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:58:21.718690 (systemd)[1845]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:58:21.821502 systemd[1845]: Queued start job for default target default.target. Feb 13 15:58:21.821947 systemd[1845]: Created slice app.slice - User Application Slice. Feb 13 15:58:21.821970 systemd[1845]: Reached target paths.target - Paths. Feb 13 15:58:21.821981 systemd[1845]: Reached target timers.target - Timers. Feb 13 15:58:21.836404 systemd[1845]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:58:21.846855 systemd[1845]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:58:21.846921 systemd[1845]: Reached target sockets.target - Sockets. Feb 13 15:58:21.846933 systemd[1845]: Reached target basic.target - Basic System. Feb 13 15:58:21.846979 systemd[1845]: Reached target default.target - Main User Target. Feb 13 15:58:21.847004 systemd[1845]: Startup finished in 121ms. Feb 13 15:58:21.847543 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:58:21.853640 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:58:22.550676 systemd[1]: Started sshd@5-157.90.248.142:22-139.178.89.65:60164.service - OpenSSH per-connection server daemon (139.178.89.65:60164). Feb 13 15:58:23.296866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:58:23.303441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:23.430481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:23.443912 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:23.493107 kubelet[1871]: E0213 15:58:23.493038 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:23.496374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:23.497215 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
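"Updating boot flags" is Flatcar's update_engine marking the booted USR partition as good after the first successful boot. If the bundled client is present on the node, the updater's state can be queried directly:

    # Prints the current operation, progress, and new version when an
    # update is in flight (idle here, per the locksmithd line earlier).
    update_engine_client -status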
Feb 13 15:58:23.544236 sshd[1857]: Accepted publickey for core from 139.178.89.65 port 60164 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:23.546197 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:23.552735 systemd-logind[1567]: New session 2 of user core. Feb 13 15:58:23.562722 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:58:24.229759 sshd[1881]: Connection closed by 139.178.89.65 port 60164 Feb 13 15:58:24.230588 sshd-session[1857]: pam_unix(sshd:session): session closed for user core Feb 13 15:58:24.235215 systemd[1]: sshd@5-157.90.248.142:22-139.178.89.65:60164.service: Deactivated successfully. Feb 13 15:58:24.238328 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:58:24.239038 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:58:24.240472 systemd-logind[1567]: Removed session 2. Feb 13 15:58:24.397607 systemd[1]: Started sshd@6-157.90.248.142:22-139.178.89.65:60180.service - OpenSSH per-connection server daemon (139.178.89.65:60180). Feb 13 15:58:25.384269 sshd[1886]: Accepted publickey for core from 139.178.89.65 port 60180 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:25.386497 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:25.393212 systemd-logind[1567]: New session 3 of user core. Feb 13 15:58:25.399675 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:58:26.062313 sshd[1889]: Connection closed by 139.178.89.65 port 60180 Feb 13 15:58:26.063360 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Feb 13 15:58:26.067578 systemd[1]: sshd@6-157.90.248.142:22-139.178.89.65:60180.service: Deactivated successfully. Feb 13 15:58:26.071110 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:58:26.071963 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:58:26.072906 systemd-logind[1567]: Removed session 3. Feb 13 15:58:26.229512 systemd[1]: Started sshd@7-157.90.248.142:22-139.178.89.65:35566.service - OpenSSH per-connection server daemon (139.178.89.65:35566). Feb 13 15:58:26.991546 systemd[1]: Started sshd@8-157.90.248.142:22-119.159.234.131:19871.service - OpenSSH per-connection server daemon (119.159.234.131:19871). Feb 13 15:58:27.224765 sshd[1894]: Accepted publickey for core from 139.178.89.65 port 35566 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:27.226689 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:27.233742 systemd-logind[1567]: New session 4 of user core. Feb 13 15:58:27.240684 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:58:27.856969 sshd[1896]: Invalid user gmodserver from 119.159.234.131 port 19871 Feb 13 15:58:27.911767 sshd[1899]: Connection closed by 139.178.89.65 port 35566 Feb 13 15:58:27.913267 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Feb 13 15:58:27.916773 systemd[1]: sshd@7-157.90.248.142:22-139.178.89.65:35566.service: Deactivated successfully. Feb 13 15:58:27.920954 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:58:27.921220 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:58:27.923282 systemd-logind[1567]: Removed session 4. 
Feb 13 15:58:28.014139 sshd[1896]: Received disconnect from 119.159.234.131 port 19871:11: Bye Bye [preauth] Feb 13 15:58:28.014139 sshd[1896]: Disconnected from invalid user gmodserver 119.159.234.131 port 19871 [preauth] Feb 13 15:58:28.019474 systemd[1]: sshd@8-157.90.248.142:22-119.159.234.131:19871.service: Deactivated successfully. Feb 13 15:58:28.081603 systemd[1]: Started sshd@9-157.90.248.142:22-139.178.89.65:35578.service - OpenSSH per-connection server daemon (139.178.89.65:35578). Feb 13 15:58:29.068212 sshd[1907]: Accepted publickey for core from 139.178.89.65 port 35578 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:29.070553 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:29.076845 systemd-logind[1567]: New session 5 of user core. Feb 13 15:58:29.083790 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:58:29.600461 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:58:29.600738 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:58:29.618086 sudo[1911]: pam_unix(sudo:session): session closed for user root Feb 13 15:58:29.778248 sshd[1910]: Connection closed by 139.178.89.65 port 35578 Feb 13 15:58:29.779245 sshd-session[1907]: pam_unix(sshd:session): session closed for user core Feb 13 15:58:29.783599 systemd[1]: sshd@9-157.90.248.142:22-139.178.89.65:35578.service: Deactivated successfully. Feb 13 15:58:29.785608 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:58:29.787335 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:58:29.788668 systemd-logind[1567]: Removed session 5. Feb 13 15:58:29.942597 systemd[1]: Started sshd@10-157.90.248.142:22-139.178.89.65:35588.service - OpenSSH per-connection server daemon (139.178.89.65:35588). Feb 13 15:58:30.931441 sshd[1916]: Accepted publickey for core from 139.178.89.65 port 35588 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:30.932959 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:30.938514 systemd-logind[1567]: New session 6 of user core. Feb 13 15:58:30.944678 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:58:31.453391 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:58:31.453657 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:58:31.458604 sudo[1921]: pam_unix(sudo:session): session closed for user root Feb 13 15:58:31.464083 sudo[1920]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:58:31.464532 sudo[1920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:58:31.482837 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:58:31.515777 augenrules[1943]: No rules Feb 13 15:58:31.516642 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:58:31.517089 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Feb 13 15:58:31.519625 sudo[1920]: pam_unix(sudo:session): session closed for user root Feb 13 15:58:31.680311 sshd[1919]: Connection closed by 139.178.89.65 port 35588 Feb 13 15:58:31.681310 sshd-session[1916]: pam_unix(sshd:session): session closed for user core Feb 13 15:58:31.686781 systemd[1]: sshd@10-157.90.248.142:22-139.178.89.65:35588.service: Deactivated successfully. Feb 13 15:58:31.691662 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:58:31.692911 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:58:31.694377 systemd-logind[1567]: Removed session 6. Feb 13 15:58:31.854705 systemd[1]: Started sshd@11-157.90.248.142:22-139.178.89.65:35598.service - OpenSSH per-connection server daemon (139.178.89.65:35598). Feb 13 15:58:32.850384 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 35598 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 15:58:32.852461 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:58:32.859940 systemd-logind[1567]: New session 7 of user core. Feb 13 15:58:32.866747 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:58:33.379024 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:58:33.379360 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:58:33.546419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:58:33.560621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:33.725262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:33.734606 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:33.746887 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:58:33.747568 (dockerd)[1992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:58:33.791334 kubelet[1987]: E0213 15:58:33.788130 1987 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:33.797612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:33.797835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:58:33.994199 dockerd[1992]: time="2025-02-13T15:58:33.992485134Z" level=info msg="Starting up" Feb 13 15:58:34.073887 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2894336576-merged.mount: Deactivated successfully. Feb 13 15:58:34.086323 systemd[1]: var-lib-docker-metacopy\x2dcheck2765412293-merged.mount: Deactivated successfully. Feb 13 15:58:34.096403 dockerd[1992]: time="2025-02-13T15:58:34.096340613Z" level=info msg="Loading containers: start." Feb 13 15:58:34.244196 kernel: Initializing XFRM netlink socket Feb 13 15:58:34.336791 systemd-networkd[1236]: docker0: Link UP Feb 13 15:58:34.365281 dockerd[1992]: time="2025-02-13T15:58:34.365211977Z" level=info msg="Loading containers: done." 
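The kubelet failure above (restart counter 6) is the expected pre-bootstrap state rather than a fault: systemd launches kubelet.service before kubeadm has written /var/lib/kubelet/config.yaml, so the process exits with status 1 and systemd schedules the next restart. A sketch of the equivalent preflight check; only the path comes from the error message above, everything else is illustrative.

```python
import os
import sys

# Path quoted verbatim in the kubelet error above; it is created later by
# `kubeadm init` / `kubeadm join`, at which point the restarts stop failing.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def main() -> int:
    if not os.path.isfile(KUBELET_CONFIG):
        print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}",
              file=sys.stderr)
        return 1  # mirrors the status=1/FAILURE exit logged above
    print("kubelet config present")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```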
Feb 13 15:58:34.384713 dockerd[1992]: time="2025-02-13T15:58:34.384276933Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:58:34.384713 dockerd[1992]: time="2025-02-13T15:58:34.384391936Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:58:34.384713 dockerd[1992]: time="2025-02-13T15:58:34.384509779Z" level=info msg="Daemon has completed initialization" Feb 13 15:58:34.431479 dockerd[1992]: time="2025-02-13T15:58:34.431336866Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:58:34.431816 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:58:35.071246 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1617811959-merged.mount: Deactivated successfully. Feb 13 15:58:35.588455 containerd[1597]: time="2025-02-13T15:58:35.588378897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:58:36.232830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443942718.mount: Deactivated successfully. Feb 13 15:58:38.865378 containerd[1597]: time="2025-02-13T15:58:38.865297510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:38.867372 containerd[1597]: time="2025-02-13T15:58:38.867306362Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205953" Feb 13 15:58:38.868368 containerd[1597]: time="2025-02-13T15:58:38.867827096Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:38.873589 containerd[1597]: time="2025-02-13T15:58:38.873542003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:38.875848 containerd[1597]: time="2025-02-13T15:58:38.875805141Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 3.287358162s" Feb 13 15:58:38.875848 containerd[1597]: time="2025-02-13T15:58:38.875838662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:58:38.901013 containerd[1597]: time="2025-02-13T15:58:38.900909907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:58:41.486050 containerd[1597]: time="2025-02-13T15:58:41.485998744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:41.488637 containerd[1597]: time="2025-02-13T15:58:41.488591445Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383111" Feb 13 15:58:41.490109 containerd[1597]: 
time="2025-02-13T15:58:41.490073440Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:41.495356 containerd[1597]: time="2025-02-13T15:58:41.495287964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:41.496894 containerd[1597]: time="2025-02-13T15:58:41.496838320Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 2.595834931s" Feb 13 15:58:41.496894 containerd[1597]: time="2025-02-13T15:58:41.496881081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 15:58:41.526018 containerd[1597]: time="2025-02-13T15:58:41.525941689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:58:43.141796 containerd[1597]: time="2025-02-13T15:58:43.141658805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:43.143473 containerd[1597]: time="2025-02-13T15:58:43.143413844Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15767000" Feb 13 15:58:43.144360 containerd[1597]: time="2025-02-13T15:58:43.144305744Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:43.147505 containerd[1597]: time="2025-02-13T15:58:43.147436655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:43.148747 containerd[1597]: time="2025-02-13T15:58:43.148619761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.62260447s" Feb 13 15:58:43.148747 containerd[1597]: time="2025-02-13T15:58:43.148737364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:58:43.174812 containerd[1597]: time="2025-02-13T15:58:43.174778470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:58:43.858059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:58:43.865894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:43.995332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:58:43.997397 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:44.050033 kubelet[2281]: E0213 15:58:44.049479 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:44.054062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:44.054511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:58:44.284600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125068246.mount: Deactivated successfully. Feb 13 15:58:44.891193 containerd[1597]: time="2025-02-13T15:58:44.890943120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:44.892577 containerd[1597]: time="2025-02-13T15:58:44.892470234Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273401" Feb 13 15:58:44.893946 containerd[1597]: time="2025-02-13T15:58:44.893870785Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:44.898477 containerd[1597]: time="2025-02-13T15:58:44.898407124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:44.900274 containerd[1597]: time="2025-02-13T15:58:44.899899597Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.725071407s" Feb 13 15:58:44.900274 containerd[1597]: time="2025-02-13T15:58:44.899971719Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:58:44.924403 containerd[1597]: time="2025-02-13T15:58:44.924311693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:58:45.512173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392623066.mount: Deactivated successfully. 
Feb 13 15:58:46.142080 containerd[1597]: time="2025-02-13T15:58:46.142012530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.143943 containerd[1597]: time="2025-02-13T15:58:46.143320557Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:58:46.145204 containerd[1597]: time="2025-02-13T15:58:46.145164316Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.153088 containerd[1597]: time="2025-02-13T15:58:46.153035401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.154620 containerd[1597]: time="2025-02-13T15:58:46.154584194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.229972534s" Feb 13 15:58:46.154725 containerd[1597]: time="2025-02-13T15:58:46.154710636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:58:46.176878 containerd[1597]: time="2025-02-13T15:58:46.176837461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:58:46.735722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418391146.mount: Deactivated successfully. 
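Each containerd "Pulled image" record above carries the repo tag, the resolved size in bytes, and the wall-clock pull time, which makes rough per-image throughput easy to extract. A sketch keyed to the escaped-quote form the records take in this dump; against raw journalctl output, where the quotes are not backslash-escaped, the `\\` sequences in the pattern would be dropped.

```python
import re
import sys

# Matches the containerd records above, e.g.
#   Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" ... size \"16482581\" in 1.229972534s
PULL = re.compile(
    r'Pulled image \\"([^"\\]+)\\".*?size \\"(\d+)\\" in ([\d.]+)(ms|s)'
)

def pulls(log_text: str):
    """Yield (image, size_bytes, seconds) for every pull record found."""
    for image, size, value, unit in PULL.findall(log_text):
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        yield image, int(size), seconds

if __name__ == "__main__":
    for image, size, seconds in pulls(sys.stdin.read()):
        mib = size / (1024 * 1024)
        print(f"{image:55s} {mib:8.1f} MiB  {seconds:9.3f} s  {mib / seconds:7.1f} MiB/s")
```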
Feb 13 15:58:46.744087 containerd[1597]: time="2025-02-13T15:58:46.742355977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.744087 containerd[1597]: time="2025-02-13T15:58:46.743988292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:58:46.744502 containerd[1597]: time="2025-02-13T15:58:46.744463341Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.746697 containerd[1597]: time="2025-02-13T15:58:46.746587026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:46.747902 containerd[1597]: time="2025-02-13T15:58:46.747852413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 570.97483ms" Feb 13 15:58:46.747902 containerd[1597]: time="2025-02-13T15:58:46.747894774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:58:46.772219 containerd[1597]: time="2025-02-13T15:58:46.772133003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:58:47.347994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101029003.mount: Deactivated successfully. Feb 13 15:58:50.429585 containerd[1597]: time="2025-02-13T15:58:50.429449897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:50.431665 containerd[1597]: time="2025-02-13T15:58:50.431604419Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Feb 13 15:58:50.432886 containerd[1597]: time="2025-02-13T15:58:50.432828123Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:50.435825 containerd[1597]: time="2025-02-13T15:58:50.435772540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:58:50.437432 containerd[1597]: time="2025-02-13T15:58:50.437271409Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.665077885s" Feb 13 15:58:50.437432 containerd[1597]: time="2025-02-13T15:58:50.437309370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:58:54.297198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
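The "Scheduled restart job" entries above put restart counters 6, 7 and 8 roughly ten seconds apart, consistent with an on-failure restart policy and a RestartSec of about 10s plus scheduling jitter; the unit's actual settings are not shown in this log, so that reading is an inference. The arithmetic, with the three stamps copied from the entries above:

```python
from datetime import datetime

# "Scheduled restart job" timestamps for counters 6, 7 and 8, copied from
# the entries above. Syslog-style stamps carry no year; 2025 is assumed
# purely so strptime can parse them.
stamps = [
    "Feb 13 15:58:33.546419",
    "Feb 13 15:58:43.858059",
    "Feb 13 15:58:54.297198",
]
times = [datetime.strptime(f"2025 {s}", "%Y %b %d %H:%M:%S.%f") for s in stamps]
for earlier, later in zip(times, times[1:]):
    print(f"restart gap: {(later - earlier).total_seconds():.1f}s")
# prints gaps of about 10.3s and 10.4s
```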
Feb 13 15:58:54.306394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:54.428343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:54.437828 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:58:54.486742 kubelet[2469]: E0213 15:58:54.486692 2469 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:58:54.491379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:58:54.491536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:58:54.512802 systemd[1]: Started sshd@12-157.90.248.142:22-118.193.38.84:43294.service - OpenSSH per-connection server daemon (118.193.38.84:43294). Feb 13 15:58:55.945226 sshd[2480]: Invalid user installer from 118.193.38.84 port 43294 Feb 13 15:58:56.218306 sshd[2480]: Received disconnect from 118.193.38.84 port 43294:11: Bye Bye [preauth] Feb 13 15:58:56.218306 sshd[2480]: Disconnected from invalid user installer 118.193.38.84 port 43294 [preauth] Feb 13 15:58:56.223116 systemd[1]: sshd@12-157.90.248.142:22-118.193.38.84:43294.service: Deactivated successfully. Feb 13 15:58:56.304627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:56.318653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:56.352389 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-7.scope)... Feb 13 15:58:56.352404 systemd[1]: Reloading... Feb 13 15:58:56.451176 zram_generator::config[2532]: No configuration found. Feb 13 15:58:56.575467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:58:56.644183 systemd[1]: Reloading finished in 291 ms. Feb 13 15:58:56.692477 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:58:56.692739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:56.695999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:58:56.820341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:58:56.832901 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:58:56.887814 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:58:56.887814 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:58:56.887814 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
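The deprecation warnings above (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at the same migration: those flags are meant to live in the KubeletConfiguration file rather than on the unit's command line. A sketch that writes a minimal such file; the kind/apiVersion header is the standard one, but the single field shown is an illustrative choice (it matches the static pod path this kubelet logs just below), and the real file on this host is generated by kubeadm with many more fields.

```python
from pathlib import Path
from textwrap import dedent

# Illustrative minimal KubeletConfiguration; NOT the file kubeadm writes.
# staticPodPath matches the "Adding static pod path" entry logged below.
MINIMAL_CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
""")

def write_config(path: str = "kubelet-config.yaml") -> None:
    Path(path).write_text(MINIMAL_CONFIG)
    print(f"wrote {path}")

if __name__ == "__main__":
    write_config()
```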
Feb 13 15:58:56.888273 kubelet[2593]: I0213 15:58:56.887880 2593 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:58:57.640975 kubelet[2593]: I0213 15:58:57.640934 2593 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:58:57.640975 kubelet[2593]: I0213 15:58:57.640969 2593 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:58:57.641320 kubelet[2593]: I0213 15:58:57.641209 2593 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:58:57.661656 kubelet[2593]: I0213 15:58:57.661263 2593 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:58:57.662435 kubelet[2593]: E0213 15:58:57.662163 2593 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.90.248.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.671827 kubelet[2593]: I0213 15:58:57.671797 2593 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:58:57.674249 kubelet[2593]: I0213 15:58:57.673631 2593 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:58:57.674249 kubelet[2593]: I0213 15:58:57.673857 2593 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:58:57.674249 kubelet[2593]: I0213 15:58:57.673881 2593 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:58:57.674249 kubelet[2593]: I0213 15:58:57.673890 2593 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:58:57.674249 kubelet[2593]: I0213 15:58:57.674020 2593 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:58:57.676848 kubelet[2593]: I0213 15:58:57.676818 2593 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:58:57.676948 
kubelet[2593]: I0213 15:58:57.676940 2593 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:58:57.677032 kubelet[2593]: I0213 15:58:57.677022 2593 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:58:57.677110 kubelet[2593]: I0213 15:58:57.677100 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:58:57.677750 kubelet[2593]: W0213 15:58:57.677694 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://157.90.248.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-f-29672fd7f0&limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.677814 kubelet[2593]: E0213 15:58:57.677775 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.248.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-f-29672fd7f0&limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.680378 kubelet[2593]: W0213 15:58:57.680334 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://157.90.248.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.680496 kubelet[2593]: E0213 15:58:57.680485 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.248.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.681638 kubelet[2593]: I0213 15:58:57.680876 2593 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:58:57.681638 kubelet[2593]: I0213 15:58:57.681433 2593 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:58:57.682264 kubelet[2593]: W0213 15:58:57.682243 2593 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
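Every reflector and certificate call above fails with connection refused against https://157.90.248.142:6443 because the kube-apiserver it targets is itself one of the static pods this kubelet has yet to start; client-go simply keeps retrying until the socket opens. A sketch of the same wait reduced to a TCP probe; the endpoint comes from the errors above, while the timeout and poll interval are assumptions, not kubelet's real backoff.

```python
import socket
import time

# Endpoint taken from the "dial tcp ... connection refused" errors above.
HOST, PORT = "157.90.248.142", 6443

def wait_for_apiserver(timeout_s: float = 300.0, interval_s: float = 2.0) -> bool:
    """Poll until a TCP connection to the apiserver port succeeds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((HOST, PORT), timeout=2.0):
                return True  # listener is up; API calls can start succeeding
        except OSError as exc:  # ConnectionRefusedError, timeouts, etc.
            print(f"apiserver not reachable yet: {exc}")
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    print("up" if wait_for_apiserver() else "timed out")
```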
Feb 13 15:58:57.683171 kubelet[2593]: I0213 15:58:57.683134 2593 server.go:1256] "Started kubelet" Feb 13 15:58:57.686907 kubelet[2593]: I0213 15:58:57.686876 2593 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:58:57.687697 kubelet[2593]: I0213 15:58:57.687666 2593 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:58:57.688539 kubelet[2593]: I0213 15:58:57.688519 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:58:57.688898 kubelet[2593]: I0213 15:58:57.688883 2593 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:58:57.690918 kubelet[2593]: E0213 15:58:57.690897 2593 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.248.142:6443/api/v1/namespaces/default/events\": dial tcp 157.90.248.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-1-f-29672fd7f0.1823cfc7b1c5bb70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-f-29672fd7f0,UID:ci-4152-2-1-f-29672fd7f0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-f-29672fd7f0,},FirstTimestamp:2025-02-13 15:58:57.683110768 +0000 UTC m=+0.845801476,LastTimestamp:2025-02-13 15:58:57.683110768 +0000 UTC m=+0.845801476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-f-29672fd7f0,}" Feb 13 15:58:57.691587 kubelet[2593]: I0213 15:58:57.691539 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:58:57.696491 kubelet[2593]: E0213 15:58:57.696470 2593 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:58:57.696854 kubelet[2593]: E0213 15:58:57.696832 2593 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-1-f-29672fd7f0\" not found" Feb 13 15:58:57.696958 kubelet[2593]: I0213 15:58:57.696946 2593 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:58:57.697127 kubelet[2593]: I0213 15:58:57.697111 2593 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:58:57.697970 kubelet[2593]: I0213 15:58:57.697254 2593 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:58:57.697970 kubelet[2593]: W0213 15:58:57.697589 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://157.90.248.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.697970 kubelet[2593]: E0213 15:58:57.697631 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.248.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.697970 kubelet[2593]: E0213 15:58:57.697834 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.248.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-f-29672fd7f0?timeout=10s\": dial tcp 157.90.248.142:6443: connect: connection refused" interval="200ms" Feb 13 15:58:57.698741 kubelet[2593]: I0213 15:58:57.698715 2593 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:58:57.698829 kubelet[2593]: I0213 15:58:57.698807 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:58:57.700007 kubelet[2593]: I0213 15:58:57.699977 2593 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:58:57.714674 kubelet[2593]: I0213 15:58:57.714639 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:58:57.716821 kubelet[2593]: I0213 15:58:57.716786 2593 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:58:57.716971 kubelet[2593]: I0213 15:58:57.716961 2593 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:58:57.717050 kubelet[2593]: I0213 15:58:57.717040 2593 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:58:57.717241 kubelet[2593]: E0213 15:58:57.717226 2593 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:58:57.731965 kubelet[2593]: W0213 15:58:57.731912 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://157.90.248.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.732299 kubelet[2593]: E0213 15:58:57.732140 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.248.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:57.733555 kubelet[2593]: I0213 15:58:57.733526 2593 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:58:57.733555 kubelet[2593]: I0213 15:58:57.733552 2593 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:58:57.733674 kubelet[2593]: I0213 15:58:57.733572 2593 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:58:57.735973 kubelet[2593]: I0213 15:58:57.735924 2593 policy_none.go:49] "None policy: Start" Feb 13 15:58:57.737102 kubelet[2593]: I0213 15:58:57.737031 2593 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:58:57.737222 kubelet[2593]: I0213 15:58:57.737141 2593 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:58:57.744392 kubelet[2593]: I0213 15:58:57.744356 2593 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:58:57.744668 kubelet[2593]: I0213 15:58:57.744631 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:58:57.749667 kubelet[2593]: E0213 15:58:57.749627 2593 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-1-f-29672fd7f0\" not found" Feb 13 15:58:57.799850 kubelet[2593]: I0213 15:58:57.799636 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.800502 kubelet[2593]: E0213 15:58:57.800483 2593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.248.142:6443/api/v1/nodes\": dial tcp 157.90.248.142:6443: connect: connection refused" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.817669 kubelet[2593]: I0213 15:58:57.817630 2593 topology_manager.go:215] "Topology Admit Handler" podUID="cda3808ce3eb2317c3dad37e842c8267" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.820347 kubelet[2593]: I0213 15:58:57.819835 2593 topology_manager.go:215] "Topology Admit Handler" podUID="482eea32cb3ef19127acbfb9030f4ddf" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.822047 kubelet[2593]: I0213 15:58:57.821984 2593 topology_manager.go:215] "Topology Admit Handler" podUID="1e9ba421b085646ca55e81c3621f194c" podNamespace="kube-system" 
podName="kube-scheduler-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.898637 kubelet[2593]: E0213 15:58:57.898430 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.248.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-f-29672fd7f0?timeout=10s\": dial tcp 157.90.248.142:6443: connect: connection refused" interval="400ms" Feb 13 15:58:57.998006 kubelet[2593]: I0213 15:58:57.997887 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e9ba421b085646ca55e81c3621f194c-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-f-29672fd7f0\" (UID: \"1e9ba421b085646ca55e81c3621f194c\") " pod="kube-system/kube-scheduler-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998006 kubelet[2593]: I0213 15:58:57.997972 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998442 kubelet[2593]: I0213 15:58:57.998107 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998442 kubelet[2593]: I0213 15:58:57.998201 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998442 kubelet[2593]: I0213 15:58:57.998244 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998442 kubelet[2593]: I0213 15:58:57.998287 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998442 kubelet[2593]: I0213 15:58:57.998318 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998674 kubelet[2593]: I0213 15:58:57.998344 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:57.998674 kubelet[2593]: I0213 15:58:57.998384 2593 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:58.002937 kubelet[2593]: I0213 15:58:58.002896 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:58.003797 kubelet[2593]: E0213 15:58:58.003767 2593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.248.142:6443/api/v1/nodes\": dial tcp 157.90.248.142:6443: connect: connection refused" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:58.125781 containerd[1597]: time="2025-02-13T15:58:58.125704704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-f-29672fd7f0,Uid:cda3808ce3eb2317c3dad37e842c8267,Namespace:kube-system,Attempt:0,}" Feb 13 15:58:58.128890 containerd[1597]: time="2025-02-13T15:58:58.128606194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-f-29672fd7f0,Uid:482eea32cb3ef19127acbfb9030f4ddf,Namespace:kube-system,Attempt:0,}" Feb 13 15:58:58.133028 containerd[1597]: time="2025-02-13T15:58:58.132959989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-f-29672fd7f0,Uid:1e9ba421b085646ca55e81c3621f194c,Namespace:kube-system,Attempt:0,}" Feb 13 15:58:58.299909 kubelet[2593]: E0213 15:58:58.299849 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.248.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-f-29672fd7f0?timeout=10s\": dial tcp 157.90.248.142:6443: connect: connection refused" interval="800ms" Feb 13 15:58:58.407129 kubelet[2593]: I0213 15:58:58.406988 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:58.407714 kubelet[2593]: E0213 15:58:58.407682 2593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.248.142:6443/api/v1/nodes\": dial tcp 157.90.248.142:6443: connect: connection refused" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:58.638966 kubelet[2593]: W0213 15:58:58.638717 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://157.90.248.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-f-29672fd7f0&limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:58.638966 kubelet[2593]: E0213 15:58:58.638812 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.248.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-f-29672fd7f0&limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:58.663461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134940685.mount: Deactivated successfully. 
Feb 13 15:58:58.673386 containerd[1597]: time="2025-02-13T15:58:58.673329982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:58:58.676183 containerd[1597]: time="2025-02-13T15:58:58.676074109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:58:58.678857 containerd[1597]: time="2025-02-13T15:58:58.678673314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:58:58.680010 containerd[1597]: time="2025-02-13T15:58:58.679885935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:58:58.682432 containerd[1597]: time="2025-02-13T15:58:58.682101453Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:58:58.682432 containerd[1597]: time="2025-02-13T15:58:58.682303016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:58:58.683448 containerd[1597]: time="2025-02-13T15:58:58.683365795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:58:58.687441 containerd[1597]: time="2025-02-13T15:58:58.687395824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:58:58.689200 containerd[1597]: time="2025-02-13T15:58:58.688670566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.883421ms" Feb 13 15:58:58.689946 containerd[1597]: time="2025-02-13T15:58:58.689881147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.186592ms" Feb 13 15:58:58.690990 containerd[1597]: time="2025-02-13T15:58:58.690949125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.896694ms" Feb 13 15:58:58.729756 kubelet[2593]: W0213 15:58:58.729237 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://157.90.248.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 
15:58:58.729756 kubelet[2593]: E0213 15:58:58.729719 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.248.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:58.811551 containerd[1597]: time="2025-02-13T15:58:58.810704789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:58.811551 containerd[1597]: time="2025-02-13T15:58:58.811376401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:58.811551 containerd[1597]: time="2025-02-13T15:58:58.811401321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.812534 containerd[1597]: time="2025-02-13T15:58:58.811898170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:58.812534 containerd[1597]: time="2025-02-13T15:58:58.812051092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:58.812534 containerd[1597]: time="2025-02-13T15:58:58.812187175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.813406 containerd[1597]: time="2025-02-13T15:58:58.813305114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.813406 containerd[1597]: time="2025-02-13T15:58:58.813213072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.816231 containerd[1597]: time="2025-02-13T15:58:58.813480597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:58:58.816231 containerd[1597]: time="2025-02-13T15:58:58.814294771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:58:58.816231 containerd[1597]: time="2025-02-13T15:58:58.814309651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.816231 containerd[1597]: time="2025-02-13T15:58:58.814395413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:58:58.895903 containerd[1597]: time="2025-02-13T15:58:58.894824319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-f-29672fd7f0,Uid:482eea32cb3ef19127acbfb9030f4ddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e0226fe38f611cab5f893b7e6ded656debdea335094431e5a4f849d3dab859\"" Feb 13 15:58:58.902176 containerd[1597]: time="2025-02-13T15:58:58.902049283Z" level=info msg="CreateContainer within sandbox \"c5e0226fe38f611cab5f893b7e6ded656debdea335094431e5a4f849d3dab859\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:58:58.908508 containerd[1597]: time="2025-02-13T15:58:58.908461354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-f-29672fd7f0,Uid:1e9ba421b085646ca55e81c3621f194c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba6dc931369f11a8d5a54106b7a4d87cf091f45ecf91b2a11c4d238f919cf963\"" Feb 13 15:58:58.911395 containerd[1597]: time="2025-02-13T15:58:58.911359604Z" level=info msg="CreateContainer within sandbox \"ba6dc931369f11a8d5a54106b7a4d87cf091f45ecf91b2a11c4d238f919cf963\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:58:58.917099 containerd[1597]: time="2025-02-13T15:58:58.917044422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-f-29672fd7f0,Uid:cda3808ce3eb2317c3dad37e842c8267,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad19b2092d2f72e8e565a7b8e0d2543f360196658d272eb7e9fe502924477083\"" Feb 13 15:58:58.921771 containerd[1597]: time="2025-02-13T15:58:58.921733103Z" level=info msg="CreateContainer within sandbox \"ad19b2092d2f72e8e565a7b8e0d2543f360196658d272eb7e9fe502924477083\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:58:58.934472 containerd[1597]: time="2025-02-13T15:58:58.934420681Z" level=info msg="CreateContainer within sandbox \"ba6dc931369f11a8d5a54106b7a4d87cf091f45ecf91b2a11c4d238f919cf963\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a\"" Feb 13 15:58:58.935733 containerd[1597]: time="2025-02-13T15:58:58.935698023Z" level=info msg="CreateContainer within sandbox \"c5e0226fe38f611cab5f893b7e6ded656debdea335094431e5a4f849d3dab859\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133\"" Feb 13 15:58:58.936873 containerd[1597]: time="2025-02-13T15:58:58.936811682Z" level=info msg="StartContainer for \"aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133\"" Feb 13 15:58:58.941781 containerd[1597]: time="2025-02-13T15:58:58.941717407Z" level=info msg="CreateContainer within sandbox \"ad19b2092d2f72e8e565a7b8e0d2543f360196658d272eb7e9fe502924477083\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b50dbca73b6d8fce71099e5668449e530ad2e5a1f7e569d571c4216321d6ff32\"" Feb 13 15:58:58.941893 containerd[1597]: time="2025-02-13T15:58:58.941872570Z" level=info msg="StartContainer for \"c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a\"" Feb 13 15:58:58.948496 containerd[1597]: time="2025-02-13T15:58:58.947384265Z" level=info msg="StartContainer for \"b50dbca73b6d8fce71099e5668449e530ad2e5a1f7e569d571c4216321d6ff32\"" Feb 13 15:58:59.042552 containerd[1597]: time="2025-02-13T15:58:59.042398014Z" level=info 
msg="StartContainer for \"c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a\" returns successfully" Feb 13 15:58:59.053291 containerd[1597]: time="2025-02-13T15:58:59.053249319Z" level=info msg="StartContainer for \"aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133\" returns successfully" Feb 13 15:58:59.066779 containerd[1597]: time="2025-02-13T15:58:59.066736668Z" level=info msg="StartContainer for \"b50dbca73b6d8fce71099e5668449e530ad2e5a1f7e569d571c4216321d6ff32\" returns successfully" Feb 13 15:58:59.071537 kubelet[2593]: W0213 15:58:59.071495 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://157.90.248.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:59.074343 kubelet[2593]: E0213 15:58:59.074304 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.248.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:59.100684 kubelet[2593]: E0213 15:58:59.100636 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.248.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-f-29672fd7f0?timeout=10s\": dial tcp 157.90.248.142:6443: connect: connection refused" interval="1.6s" Feb 13 15:58:59.172340 kubelet[2593]: W0213 15:58:59.172181 2593 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://157.90.248.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:59.172340 kubelet[2593]: E0213 15:58:59.172266 2593 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.248.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.248.142:6443: connect: connection refused Feb 13 15:58:59.211455 kubelet[2593]: I0213 15:58:59.211359 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:58:59.212013 kubelet[2593]: E0213 15:58:59.211997 2593 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.248.142:6443/api/v1/nodes\": dial tcp 157.90.248.142:6443: connect: connection refused" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:00.817961 kubelet[2593]: I0213 15:59:00.817897 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:01.562503 kubelet[2593]: E0213 15:59:01.562461 2593 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-1-f-29672fd7f0\" not found" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:01.638166 kubelet[2593]: I0213 15:59:01.636803 2593 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:01.682587 kubelet[2593]: I0213 15:59:01.682548 2593 apiserver.go:52] "Watching apiserver" Feb 13 15:59:01.797517 kubelet[2593]: I0213 15:59:01.797456 2593 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:59:04.521953 systemd[1]: Reloading requested from client PID 2865 ('systemctl') (unit session-7.scope)... 
Feb 13 15:59:04.521974 systemd[1]: Reloading... Feb 13 15:59:04.624187 zram_generator::config[2908]: No configuration found. Feb 13 15:59:04.720512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:59:04.800309 systemd[1]: Reloading finished in 277 ms. Feb 13 15:59:04.836425 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:04.837021 kubelet[2593]: I0213 15:59:04.836538 2593 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:59:04.856981 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:59:04.857651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:59:04.865449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:04.988438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:59:04.993432 (kubelet)[2960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:59:05.075610 kubelet[2960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:59:05.075610 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:59:05.075610 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:59:05.075610 kubelet[2960]: I0213 15:59:05.073932 2960 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:59:05.081398 kubelet[2960]: I0213 15:59:05.079001 2960 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:59:05.081398 kubelet[2960]: I0213 15:59:05.079033 2960 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:59:05.081398 kubelet[2960]: I0213 15:59:05.079272 2960 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:59:05.081604 kubelet[2960]: I0213 15:59:05.081485 2960 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:59:05.084688 kubelet[2960]: I0213 15:59:05.084477 2960 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:59:05.096214 kubelet[2960]: I0213 15:59:05.096177 2960 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:59:05.096656 kubelet[2960]: I0213 15:59:05.096643 2960 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:59:05.096840 kubelet[2960]: I0213 15:59:05.096822 2960 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:59:05.096924 kubelet[2960]: I0213 15:59:05.096846 2960 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:59:05.096924 kubelet[2960]: I0213 15:59:05.096855 2960 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:59:05.096924 kubelet[2960]: I0213 15:59:05.096888 2960 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:59:05.097001 kubelet[2960]: I0213 15:59:05.096986 2960 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:59:05.097001 kubelet[2960]: I0213 15:59:05.096999 2960 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:59:05.097044 kubelet[2960]: I0213 15:59:05.097024 2960 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:59:05.097044 kubelet[2960]: I0213 15:59:05.097037 2960 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:59:05.099502 kubelet[2960]: I0213 15:59:05.099306 2960 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:59:05.104349 kubelet[2960]: I0213 15:59:05.100189 2960 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:59:05.104349 kubelet[2960]: I0213 15:59:05.100900 2960 server.go:1256] "Started kubelet" Feb 13 15:59:05.106487 kubelet[2960]: I0213 15:59:05.106412 2960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:59:05.116335 sudo[2976]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:59:05.116997 sudo[2976]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:59:05.117522 kubelet[2960]: I0213 15:59:05.117065 2960 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 
15:59:05.118655 kubelet[2960]: I0213 15:59:05.118344 2960 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:59:05.120569 kubelet[2960]: I0213 15:59:05.120543 2960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:59:05.120756 kubelet[2960]: I0213 15:59:05.120741 2960 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:59:05.125333 kubelet[2960]: I0213 15:59:05.125075 2960 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:59:05.143892 kubelet[2960]: I0213 15:59:05.143850 2960 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:59:05.144020 kubelet[2960]: I0213 15:59:05.144009 2960 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:59:05.148874 kubelet[2960]: I0213 15:59:05.148320 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:59:05.151290 kubelet[2960]: I0213 15:59:05.151256 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:59:05.151290 kubelet[2960]: I0213 15:59:05.151297 2960 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:59:05.151441 kubelet[2960]: I0213 15:59:05.151320 2960 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:59:05.151441 kubelet[2960]: E0213 15:59:05.151391 2960 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:59:05.170786 kubelet[2960]: I0213 15:59:05.170757 2960 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:59:05.171176 kubelet[2960]: I0213 15:59:05.171141 2960 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:59:05.175978 kubelet[2960]: E0213 15:59:05.175940 2960 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:59:05.177248 kubelet[2960]: I0213 15:59:05.177227 2960 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:59:05.233511 kubelet[2960]: I0213 15:59:05.232505 2960 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.247520 kubelet[2960]: I0213 15:59:05.246932 2960 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.247777 kubelet[2960]: I0213 15:59:05.247765 2960 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.255310 kubelet[2960]: E0213 15:59:05.255280 2960 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:59:05.264809 kubelet[2960]: I0213 15:59:05.264767 2960 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:59:05.264809 kubelet[2960]: I0213 15:59:05.264791 2960 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:59:05.264809 kubelet[2960]: I0213 15:59:05.264808 2960 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:59:05.264972 kubelet[2960]: I0213 15:59:05.264946 2960 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:59:05.264972 kubelet[2960]: I0213 15:59:05.264968 2960 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:59:05.265033 kubelet[2960]: I0213 15:59:05.264976 2960 policy_none.go:49] "None policy: Start" Feb 13 15:59:05.269202 kubelet[2960]: I0213 15:59:05.267853 2960 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:59:05.269202 kubelet[2960]: I0213 15:59:05.267891 2960 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:59:05.269202 kubelet[2960]: I0213 15:59:05.268193 2960 state_mem.go:75] "Updated machine memory state" Feb 13 15:59:05.270300 kubelet[2960]: I0213 15:59:05.270265 2960 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:59:05.270743 kubelet[2960]: I0213 15:59:05.270705 2960 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:59:05.455897 kubelet[2960]: I0213 15:59:05.455753 2960 topology_manager.go:215] "Topology Admit Handler" podUID="cda3808ce3eb2317c3dad37e842c8267" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.456926 kubelet[2960]: I0213 15:59:05.456896 2960 topology_manager.go:215] "Topology Admit Handler" podUID="482eea32cb3ef19127acbfb9030f4ddf" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.456998 kubelet[2960]: I0213 15:59:05.456989 2960 topology_manager.go:215] "Topology Admit Handler" podUID="1e9ba421b085646ca55e81c3621f194c" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.469620 kubelet[2960]: E0213 15:59:05.469569 2960 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.546650 kubelet[2960]: I0213 15:59:05.546410 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e9ba421b085646ca55e81c3621f194c-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-f-29672fd7f0\" (UID: 
\"1e9ba421b085646ca55e81c3621f194c\") " pod="kube-system/kube-scheduler-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.546650 kubelet[2960]: I0213 15:59:05.546484 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.546650 kubelet[2960]: I0213 15:59:05.546593 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.546650 kubelet[2960]: I0213 15:59:05.546621 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.547241 kubelet[2960]: I0213 15:59:05.547008 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.547241 kubelet[2960]: I0213 15:59:05.547064 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.547241 kubelet[2960]: I0213 15:59:05.547085 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda3808ce3eb2317c3dad37e842c8267-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" (UID: \"cda3808ce3eb2317c3dad37e842c8267\") " pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.547241 kubelet[2960]: I0213 15:59:05.547104 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:05.547241 kubelet[2960]: I0213 15:59:05.547204 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/482eea32cb3ef19127acbfb9030f4ddf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-f-29672fd7f0\" (UID: \"482eea32cb3ef19127acbfb9030f4ddf\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" Feb 13 
15:59:05.663761 sudo[2976]: pam_unix(sudo:session): session closed for user root Feb 13 15:59:06.111481 kubelet[2960]: I0213 15:59:06.111000 2960 apiserver.go:52] "Watching apiserver" Feb 13 15:59:06.144646 kubelet[2960]: I0213 15:59:06.144588 2960 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:59:06.188680 kubelet[2960]: I0213 15:59:06.188505 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-1-f-29672fd7f0" podStartSLOduration=1.188413906 podStartE2EDuration="1.188413906s" podCreationTimestamp="2025-02-13 15:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:59:06.176215112 +0000 UTC m=+1.178692632" watchObservedRunningTime="2025-02-13 15:59:06.188413906 +0000 UTC m=+1.190891466" Feb 13 15:59:06.202547 kubelet[2960]: I0213 15:59:06.202480 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" podStartSLOduration=2.202402889 podStartE2EDuration="2.202402889s" podCreationTimestamp="2025-02-13 15:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:59:06.188991155 +0000 UTC m=+1.191468675" watchObservedRunningTime="2025-02-13 15:59:06.202402889 +0000 UTC m=+1.204880449" Feb 13 15:59:06.202741 kubelet[2960]: I0213 15:59:06.202660 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-1-f-29672fd7f0" podStartSLOduration=1.202633893 podStartE2EDuration="1.202633893s" podCreationTimestamp="2025-02-13 15:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:59:06.20056598 +0000 UTC m=+1.203043540" watchObservedRunningTime="2025-02-13 15:59:06.202633893 +0000 UTC m=+1.205111453" Feb 13 15:59:06.218026 kubelet[2960]: E0213 15:59:06.217970 2960 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-f-29672fd7f0\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-f-29672fd7f0" Feb 13 15:59:07.026055 sudo[1956]: pam_unix(sudo:session): session closed for user root Feb 13 15:59:07.186757 sshd[1955]: Connection closed by 139.178.89.65 port 35598 Feb 13 15:59:07.187862 sshd-session[1952]: pam_unix(sshd:session): session closed for user core Feb 13 15:59:07.194924 systemd[1]: sshd@11-157.90.248.142:22-139.178.89.65:35598.service: Deactivated successfully. Feb 13 15:59:07.199686 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:59:07.200454 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:59:07.202099 systemd-logind[1567]: Removed session 7. Feb 13 15:59:07.762614 systemd[1]: Started sshd@13-157.90.248.142:22-27.111.32.174:56970.service - OpenSSH per-connection server daemon (27.111.32.174:56970). Feb 13 15:59:08.754708 sshd[3028]: Invalid user gmodserver from 27.111.32.174 port 56970 Feb 13 15:59:08.933671 sshd[3028]: Received disconnect from 27.111.32.174 port 56970:11: Bye Bye [preauth] Feb 13 15:59:08.933671 sshd[3028]: Disconnected from invalid user gmodserver 27.111.32.174 port 56970 [preauth] Feb 13 15:59:08.935656 systemd[1]: sshd@13-157.90.248.142:22-27.111.32.174:56970.service: Deactivated successfully. 
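
[Editor's note] Entries like the `Invalid user gmodserver` lines above are routine SSH scanning that dies at preauth. A minimal parser for exactly this journal wording — the regex mirrors the "Invalid user NAME from ADDR port PORT" format shown and assumes nothing beyond it:

```python
import re

# Matches the sshd probe lines as they appear in this journal.
PAT = re.compile(r"Invalid user (\S+) from ([\d.]+) port (\d+)")

sample = "Feb 13 15:59:08.754708 sshd[3028]: Invalid user gmodserver from 27.111.32.174 port 56970"
m = PAT.search(sample)
if m:
    user, addr, port = m.groups()
    print(f"probe: user={user} addr={addr} port={port}")
    # -> probe: user=gmodserver addr=27.111.32.174 port=56970

```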
Feb 13 15:59:16.830768 systemd[1]: Started sshd@14-157.90.248.142:22-125.94.71.207:52420.service - OpenSSH per-connection server daemon (125.94.71.207:52420). Feb 13 15:59:17.484737 kubelet[2960]: I0213 15:59:17.484680 2960 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:59:17.485323 containerd[1597]: time="2025-02-13T15:59:17.485057911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:59:17.486065 kubelet[2960]: I0213 15:59:17.485675 2960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:59:18.200096 sshd[3033]: Invalid user eric from 125.94.71.207 port 52420 Feb 13 15:59:18.366510 kubelet[2960]: I0213 15:59:18.366435 2960 topology_manager.go:215] "Topology Admit Handler" podUID="c77521d2-2705-426b-9652-911fc3c51e45" podNamespace="kube-system" podName="kube-proxy-92fkb" Feb 13 15:59:18.376882 kubelet[2960]: I0213 15:59:18.374386 2960 topology_manager.go:215] "Topology Admit Handler" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" podNamespace="kube-system" podName="cilium-w99cn" Feb 13 15:59:18.427998 kubelet[2960]: I0213 15:59:18.427949 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c77521d2-2705-426b-9652-911fc3c51e45-xtables-lock\") pod \"kube-proxy-92fkb\" (UID: \"c77521d2-2705-426b-9652-911fc3c51e45\") " pod="kube-system/kube-proxy-92fkb" Feb 13 15:59:18.427998 kubelet[2960]: I0213 15:59:18.427996 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-etc-cni-netd\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428017 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-xtables-lock\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428048 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c77521d2-2705-426b-9652-911fc3c51e45-kube-proxy\") pod \"kube-proxy-92fkb\" (UID: \"c77521d2-2705-426b-9652-911fc3c51e45\") " pod="kube-system/kube-proxy-92fkb" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428067 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-lib-modules\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428085 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c77521d2-2705-426b-9652-911fc3c51e45-lib-modules\") pod \"kube-proxy-92fkb\" (UID: \"c77521d2-2705-426b-9652-911fc3c51e45\") " pod="kube-system/kube-proxy-92fkb" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428121 2960 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665mq\" (UniqueName: \"kubernetes.io/projected/c77521d2-2705-426b-9652-911fc3c51e45-kube-api-access-665mq\") pod \"kube-proxy-92fkb\" (UID: \"c77521d2-2705-426b-9652-911fc3c51e45\") " pod="kube-system/kube-proxy-92fkb" Feb 13 15:59:18.428215 kubelet[2960]: I0213 15:59:18.428144 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-bpf-maps\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428181 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-config-path\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428204 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cni-path\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428222 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23d0f52a-b204-4952-b16e-9af68c46b3de-clustermesh-secrets\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428241 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkd4b\" (UniqueName: \"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-kube-api-access-kkd4b\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428263 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-net\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428398 kubelet[2960]: I0213 15:59:18.428281 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-hostproc\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428675 kubelet[2960]: I0213 15:59:18.428301 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-kernel\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428675 kubelet[2960]: I0213 15:59:18.428319 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-hubble-tls\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428675 kubelet[2960]: I0213 15:59:18.428345 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-run\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.428675 kubelet[2960]: I0213 15:59:18.428366 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-cgroup\") pod \"cilium-w99cn\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") " pod="kube-system/cilium-w99cn" Feb 13 15:59:18.543236 kubelet[2960]: I0213 15:59:18.540059 2960 topology_manager.go:215] "Topology Admit Handler" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" podNamespace="kube-system" podName="cilium-operator-5cc964979-cgxkn" Feb 13 15:59:18.629401 kubelet[2960]: I0213 15:59:18.629355 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nf4b\" (UniqueName: \"kubernetes.io/projected/6498fa03-41c7-4e17-89a9-556afca84ae2-kube-api-access-9nf4b\") pod \"cilium-operator-5cc964979-cgxkn\" (UID: \"6498fa03-41c7-4e17-89a9-556afca84ae2\") " pod="kube-system/cilium-operator-5cc964979-cgxkn" Feb 13 15:59:18.629401 kubelet[2960]: I0213 15:59:18.629416 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6498fa03-41c7-4e17-89a9-556afca84ae2-cilium-config-path\") pod \"cilium-operator-5cc964979-cgxkn\" (UID: \"6498fa03-41c7-4e17-89a9-556afca84ae2\") " pod="kube-system/cilium-operator-5cc964979-cgxkn" Feb 13 15:59:18.684219 containerd[1597]: time="2025-02-13T15:59:18.682437724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92fkb,Uid:c77521d2-2705-426b-9652-911fc3c51e45,Namespace:kube-system,Attempt:0,}" Feb 13 15:59:18.696381 containerd[1597]: time="2025-02-13T15:59:18.692999529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w99cn,Uid:23d0f52a-b204-4952-b16e-9af68c46b3de,Namespace:kube-system,Attempt:0,}" Feb 13 15:59:18.714016 containerd[1597]: time="2025-02-13T15:59:18.713610253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:59:18.714016 containerd[1597]: time="2025-02-13T15:59:18.713684774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:59:18.714016 containerd[1597]: time="2025-02-13T15:59:18.713711374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.714016 containerd[1597]: time="2025-02-13T15:59:18.713799455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.723619 containerd[1597]: time="2025-02-13T15:59:18.723346088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:59:18.723619 containerd[1597]: time="2025-02-13T15:59:18.723405009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:59:18.723619 containerd[1597]: time="2025-02-13T15:59:18.723417609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.724350 containerd[1597]: time="2025-02-13T15:59:18.723496050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.764599 containerd[1597]: time="2025-02-13T15:59:18.764553495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92fkb,Uid:c77521d2-2705-426b-9652-911fc3c51e45,Namespace:kube-system,Attempt:0,} returns sandbox id \"191118fb03b182d3e53131f7a5dc437d52385a4e323c87a4f0b6c95ca6ff9ae7\"" Feb 13 15:59:18.769564 containerd[1597]: time="2025-02-13T15:59:18.769528954Z" level=info msg="CreateContainer within sandbox \"191118fb03b182d3e53131f7a5dc437d52385a4e323c87a4f0b6c95ca6ff9ae7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:59:18.784725 containerd[1597]: time="2025-02-13T15:59:18.784660093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w99cn,Uid:23d0f52a-b204-4952-b16e-9af68c46b3de,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\"" Feb 13 15:59:18.786966 containerd[1597]: time="2025-02-13T15:59:18.786906360Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:59:18.797212 containerd[1597]: time="2025-02-13T15:59:18.794123485Z" level=info msg="CreateContainer within sandbox \"191118fb03b182d3e53131f7a5dc437d52385a4e323c87a4f0b6c95ca6ff9ae7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6017fdb215a408f3d927ee5aee387f2c930aab4dd3a4cb217b923ed271a5735e\"" Feb 13 15:59:18.797212 containerd[1597]: time="2025-02-13T15:59:18.796685955Z" level=info msg="StartContainer for \"6017fdb215a408f3d927ee5aee387f2c930aab4dd3a4cb217b923ed271a5735e\"" Feb 13 15:59:18.863284 containerd[1597]: time="2025-02-13T15:59:18.863076661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cgxkn,Uid:6498fa03-41c7-4e17-89a9-556afca84ae2,Namespace:kube-system,Attempt:0,}" Feb 13 15:59:18.866941 containerd[1597]: time="2025-02-13T15:59:18.863967151Z" level=info msg="StartContainer for \"6017fdb215a408f3d927ee5aee387f2c930aab4dd3a4cb217b923ed271a5735e\" returns successfully" Feb 13 15:59:18.899720 containerd[1597]: time="2025-02-13T15:59:18.899586373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:59:18.899853 containerd[1597]: time="2025-02-13T15:59:18.899779095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:59:18.899853 containerd[1597]: time="2025-02-13T15:59:18.899813335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.900182 containerd[1597]: time="2025-02-13T15:59:18.899938057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:59:18.983284 containerd[1597]: time="2025-02-13T15:59:18.983013000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cgxkn,Uid:6498fa03-41c7-4e17-89a9-556afca84ae2,Namespace:kube-system,Attempt:0,} returns sandbox id \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\"" Feb 13 15:59:19.127917 sshd[3033]: Received disconnect from 125.94.71.207 port 52420:11: Bye Bye [preauth] Feb 13 15:59:19.127917 sshd[3033]: Disconnected from invalid user eric 125.94.71.207 port 52420 [preauth] Feb 13 15:59:19.132677 systemd[1]: sshd@14-157.90.248.142:22-125.94.71.207:52420.service: Deactivated successfully. Feb 13 15:59:19.260639 kubelet[2960]: I0213 15:59:19.260418 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-92fkb" podStartSLOduration=1.260353175 podStartE2EDuration="1.260353175s" podCreationTimestamp="2025-02-13 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:59:19.259863649 +0000 UTC m=+14.262341409" watchObservedRunningTime="2025-02-13 15:59:19.260353175 +0000 UTC m=+14.262830695" Feb 13 15:59:19.692538 systemd[1]: Started sshd@15-157.90.248.142:22-186.124.22.55:39558.service - OpenSSH per-connection server daemon (186.124.22.55:39558). Feb 13 15:59:20.995360 sshd[3324]: Invalid user john from 186.124.22.55 port 39558 Feb 13 15:59:21.236504 sshd[3324]: Received disconnect from 186.124.22.55 port 39558:11: Bye Bye [preauth] Feb 13 15:59:21.236504 sshd[3324]: Disconnected from invalid user john 186.124.22.55 port 39558 [preauth] Feb 13 15:59:21.241332 systemd[1]: sshd@15-157.90.248.142:22-186.124.22.55:39558.service: Deactivated successfully. Feb 13 15:59:22.990082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254582737.mount: Deactivated successfully. 
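
[Editor's note] The `pod_startup_latency_tracker` entry above reports podStartSLOduration=1.260353175s for kube-proxy-92fkb, and that figure is exactly watchObservedRunningTime minus podCreationTimestamp, both printed in the same entry. Reproducing the subtraction (Python's datetime is microsecond-granular, so the nanosecond tail truncates):

```python
from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker entry above;
# watchObservedRunningTime is truncated from 15:59:19.260353175 to microseconds.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created  = datetime.strptime("2025-02-13 15:59:18.000000 +0000", fmt)
observed = datetime.strptime("2025-02-13 15:59:19.260353 +0000", fmt)
print((observed - created).total_seconds())  # 1.260353, matching the logged 1.260353175s

```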
Feb 13 15:59:30.481136 containerd[1597]: time="2025-02-13T15:59:30.481061786Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:7e29147ae784da5b49bd606db5db2eb71b4dd74bc521c5458a6529c3c8d3babc: 504 Gateway Time-out" Feb 13 15:59:30.481649 containerd[1597]: time="2025-02-13T15:59:30.481112987Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=150845086" Feb 13 15:59:30.481649 containerd[1597]: time="2025-02-13T15:59:30.482038678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:59:30.483332 kubelet[2960]: E0213 15:59:30.481426 2960 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:7e29147ae784da5b49bd606db5db2eb71b4dd74bc521c5458a6529c3c8d3babc: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 13 15:59:30.483332 kubelet[2960]: E0213 15:59:30.481484 2960 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:7e29147ae784da5b49bd606db5db2eb71b4dd74bc521c5458a6529c3c8d3babc: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 13 15:59:30.483332 kubelet[2960]: E0213 15:59:30.481736 2960 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 15:59:30.483332 kubelet[2960]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 15:59:30.483332 kubelet[2960]: rm /hostbin/cilium-mount Feb 13 15:59:30.483830 kubelet[2960]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kkd4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w99cn_kube-system(23d0f52a-b204-4952-b16e-9af68c46b3de): ErrImagePull: failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:7e29147ae784da5b49bd606db5db2eb71b4dd74bc521c5458a6529c3c8d3babc: 504 Gateway Time-out Feb 13 15:59:30.483895 kubelet[2960]: E0213 15:59:30.481789 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:7e29147ae784da5b49bd606db5db2eb71b4dd74bc521c5458a6529c3c8d3babc: 504 Gateway Time-out\"" pod="kube-system/cilium-w99cn" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" Feb 13 15:59:31.280669 kubelet[2960]: E0213 15:59:31.280004 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-w99cn" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" Feb 13 15:59:31.963328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126756347.mount: Deactivated successfully. 
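
[Editor's note] After the ErrImagePull above, the kubelet parks the pod in ImagePullBackOff rather than hammering quay.io on every sync. The exact schedule is not printed in this journal; the commonly documented behaviour is exponential doubling up to a cap (often quoted as 10s doubling to 5m), which the sketch below treats as assumed constants, not authoritative kubelet values.

```python
# Assumed back-off constants; illustrative only.
INITIAL, CAP = 10.0, 300.0

def backoff_schedule(failures: int) -> list[float]:
    """Delay before each retry after `failures` consecutive pull errors."""
    delays, d = [], INITIAL
    for _ in range(failures):
        delays.append(d)
        d = min(d * 2, CAP)
    return delays

print(backoff_schedule(6))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

```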
Feb 13 15:59:41.894417 containerd[1597]: time="2025-02-13T15:59:41.894254059Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:b144a5d00f9fb32d0c04abaa0766fb6f57469470366f8646be0ea183fb201bdc: 504 Gateway Time-out" Feb 13 15:59:41.894417 containerd[1597]: time="2025-02-13T15:59:41.894380220Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=16040079" Feb 13 15:59:41.894946 kubelet[2960]: E0213 15:59:41.894753 2960 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:b144a5d00f9fb32d0c04abaa0766fb6f57469470366f8646be0ea183fb201bdc: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Feb 13 15:59:41.894946 kubelet[2960]: E0213 15:59:41.894841 2960 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:b144a5d00f9fb32d0c04abaa0766fb6f57469470366f8646be0ea183fb201bdc: 504 Gateway Time-out" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Feb 13 15:59:41.895782 kubelet[2960]: E0213 15:59:41.895338 2960 kuberuntime_manager.go:1262] container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map 
--debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9nf4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-5cc964979-cgxkn_kube-system(6498fa03-41c7-4e17-89a9-556afca84ae2): ErrImagePull: failed to pull and unpack image "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:b144a5d00f9fb32d0c04abaa0766fb6f57469470366f8646be0ea183fb201bdc: 504 Gateway Time-out Feb 13 15:59:41.895782 kubelet[2960]: E0213 15:59:41.895577 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/operator-generic/blobs/sha256:b144a5d00f9fb32d0c04abaa0766fb6f57469470366f8646be0ea183fb201bdc: 504 Gateway Time-out\"" pod="kube-system/cilium-operator-5cc964979-cgxkn" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" Feb 13 15:59:42.156644 containerd[1597]: time="2025-02-13T15:59:42.155335482Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:59:42.304250 kubelet[2960]: E0213 15:59:42.304202 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"\"" pod="kube-system/cilium-operator-5cc964979-cgxkn" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" 
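
[Editor's note] The `stop pulling ... bytes read=` counters give a rough sense of how far each failed pull got before the registry 504'd. Comparing them against the image sizes reported by the eventual successful pulls further down (157636062 bytes for cilium, 17128551 for operator-generic) suggests both attempts died close to completion — rough only, since the counter measures bytes fetched in that attempt, and layer fetches are not resumable across attempts. Using only numbers from this journal:

```python
# bytes read at failure (entries above) vs. size reported by the later
# successful pulls of the same references (entries further down)
attempts = {
    "cilium (1st failure)": (150_845_086, 157_636_062),
    "operator-generic":     ( 16_040_079,  17_128_551),
}
for name, (read, size) in attempts.items():
    print(f"{name}: {100 * read / size:.1f}% of the image before the 504")
# cilium (1st failure): ~95.7%, operator-generic: ~93.6%

```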
Feb 13 15:59:53.556647 containerd[1597]: time="2025-02-13T15:59:53.556568442Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: 504 Gateway Time-out" Feb 13 15:59:53.556647 containerd[1597]: time="2025-02-13T15:59:53.556632203Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=77544572" Feb 13 15:59:53.557188 kubelet[2960]: E0213 15:59:53.556898 2960 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 13 15:59:53.557188 kubelet[2960]: E0213 15:59:53.556939 2960 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 13 15:59:53.557188 kubelet[2960]: E0213 15:59:53.557034 2960 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 15:59:53.557188 kubelet[2960]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 15:59:53.557188 kubelet[2960]: rm /hostbin/cilium-mount Feb 13 15:59:53.557188 kubelet[2960]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kkd4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w99cn_kube-system(23d0f52a-b204-4952-b16e-9af68c46b3de): ErrImagePull: failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: 504 Gateway Time-out Feb 13 15:59:53.557188 kubelet[2960]: E0213 15:59:53.557087 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: 504 Gateway Time-out\"" pod="kube-system/cilium-w99cn" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" Feb 13 15:59:56.044248 systemd[1]: Started sshd@16-157.90.248.142:22-119.159.234.131:32705.service - OpenSSH per-connection server daemon (119.159.234.131:32705). Feb 13 15:59:56.905183 sshd[3348]: Invalid user manager from 119.159.234.131 port 32705 Feb 13 15:59:57.060473 sshd[3348]: Received disconnect from 119.159.234.131 port 32705:11: Bye Bye [preauth] Feb 13 15:59:57.060473 sshd[3348]: Disconnected from invalid user manager 119.159.234.131 port 32705 [preauth] Feb 13 15:59:57.065246 systemd[1]: sshd@16-157.90.248.142:22-119.159.234.131:32705.service: Deactivated successfully. Feb 13 15:59:57.161267 containerd[1597]: time="2025-02-13T15:59:57.161114209Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:59:59.147547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076301143.mount: Deactivated successfully. 
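
[Editor's note] Note that the two cilium failures name different blob digests (sha256:7e29147a... first, sha256:d501de6a... here), so the 504s hit different layers rather than one persistently broken blob. Extracting the failing digest and status from the error string is a one-liner against the exact format containerd logs above:

```python
import re

# The error string is copied verbatim from the second cilium failure above.
ERR = ("failed to copy: httpReadSeeker: failed open: unexpected status code "
       "https://quay.io/v2/cilium/cilium/blobs/"
       "sha256:d501de6a3af490eb336bf83d7fd23a1c9fe7fb688e658a9596c5b497920493ba: "
       "504 Gateway Time-out")

m = re.search(r"blobs/(sha256:[0-9a-f]{64}): (\d{3})", ERR)
print(m.groups())  # ('sha256:d501de6a...93ba', '504')

```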
Feb 13 15:59:59.467654 containerd[1597]: time="2025-02-13T15:59:59.467478321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:59:59.469672 containerd[1597]: time="2025-02-13T15:59:59.469115503Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:59:59.471206 containerd[1597]: time="2025-02-13T15:59:59.470686643Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:59:59.473948 containerd[1597]: time="2025-02-13T15:59:59.473828325Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.312626875s" Feb 13 15:59:59.473948 containerd[1597]: time="2025-02-13T15:59:59.473861485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:59:59.476709 containerd[1597]: time="2025-02-13T15:59:59.476589161Z" level=info msg="CreateContainer within sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:59:59.493362 containerd[1597]: time="2025-02-13T15:59:59.493318302Z" level=info msg="CreateContainer within sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\"" Feb 13 15:59:59.493865 containerd[1597]: time="2025-02-13T15:59:59.493833348Z" level=info msg="StartContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\"" Feb 13 15:59:59.549065 containerd[1597]: time="2025-02-13T15:59:59.548997835Z" level=info msg="StartContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" returns successfully" Feb 13 16:00:08.155185 kubelet[2960]: E0213 16:00:08.153197 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-w99cn" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" Feb 13 16:00:08.177412 kubelet[2960]: I0213 16:00:08.177341 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cgxkn" podStartSLOduration=9.688402396 podStartE2EDuration="50.177279859s" podCreationTimestamp="2025-02-13 15:59:18 +0000 UTC" firstStartedPulling="2025-02-13 15:59:18.985237426 +0000 UTC m=+13.987714946" lastFinishedPulling="2025-02-13 15:59:59.474114889 +0000 UTC m=+54.476592409" observedRunningTime="2025-02-13 16:00:00.367813712 +0000 UTC m=+55.370291232" 
watchObservedRunningTime="2025-02-13 16:00:08.177279859 +0000 UTC m=+63.179757379" Feb 13 16:00:19.156288 containerd[1597]: time="2025-02-13T16:00:19.154811561Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:00:20.237411 systemd[1]: Started sshd@17-157.90.248.142:22-118.193.38.84:37898.service - OpenSSH per-connection server daemon (118.193.38.84:37898). Feb 13 16:00:21.670547 sshd[3408]: Invalid user dockeruser from 118.193.38.84 port 37898 Feb 13 16:00:21.943264 sshd[3408]: Received disconnect from 118.193.38.84 port 37898:11: Bye Bye [preauth] Feb 13 16:00:21.943264 sshd[3408]: Disconnected from invalid user dockeruser 118.193.38.84 port 37898 [preauth] Feb 13 16:00:21.945645 systemd[1]: sshd@17-157.90.248.142:22-118.193.38.84:37898.service: Deactivated successfully. Feb 13 16:00:23.374926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135568615.mount: Deactivated successfully. Feb 13 16:00:24.854737 containerd[1597]: time="2025-02-13T16:00:24.854665627Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:24.856214 containerd[1597]: time="2025-02-13T16:00:24.856102806Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 16:00:24.857409 containerd[1597]: time="2025-02-13T16:00:24.857353783Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:24.859396 containerd[1597]: time="2025-02-13T16:00:24.859354650Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.704483848s" Feb 13 16:00:24.859466 containerd[1597]: time="2025-02-13T16:00:24.859396770Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 16:00:24.862117 containerd[1597]: time="2025-02-13T16:00:24.862085927Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:00:24.876402 containerd[1597]: time="2025-02-13T16:00:24.876328798Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\"" Feb 13 16:00:24.878308 containerd[1597]: time="2025-02-13T16:00:24.877282451Z" level=info msg="StartContainer for \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\"" Feb 13 16:00:24.903985 systemd[1]: run-containerd-runc-k8s.io-9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b-runc.TrXko7.mount: Deactivated successfully. 
Feb 13 16:00:24.936772 containerd[1597]: time="2025-02-13T16:00:24.936728851Z" level=info msg="StartContainer for \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\" returns successfully" Feb 13 16:00:25.072910 containerd[1597]: time="2025-02-13T16:00:25.072837683Z" level=info msg="shim disconnected" id=9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b namespace=k8s.io Feb 13 16:00:25.072910 containerd[1597]: time="2025-02-13T16:00:25.072900364Z" level=warning msg="cleaning up after shim disconnected" id=9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b namespace=k8s.io Feb 13 16:00:25.072910 containerd[1597]: time="2025-02-13T16:00:25.072909964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:00:25.414742 containerd[1597]: time="2025-02-13T16:00:25.414676725Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:00:25.437787 containerd[1597]: time="2025-02-13T16:00:25.437217868Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\"" Feb 13 16:00:25.446462 containerd[1597]: time="2025-02-13T16:00:25.445498260Z" level=info msg="StartContainer for \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\"" Feb 13 16:00:25.502308 containerd[1597]: time="2025-02-13T16:00:25.502117942Z" level=info msg="StartContainer for \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\" returns successfully" Feb 13 16:00:25.512118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:00:25.512849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:00:25.513448 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:00:25.520715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:00:25.538783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:00:25.547077 containerd[1597]: time="2025-02-13T16:00:25.546951386Z" level=info msg="shim disconnected" id=ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628 namespace=k8s.io Feb 13 16:00:25.547077 containerd[1597]: time="2025-02-13T16:00:25.547015907Z" level=warning msg="cleaning up after shim disconnected" id=ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628 namespace=k8s.io Feb 13 16:00:25.547077 containerd[1597]: time="2025-02-13T16:00:25.547025387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:00:25.620492 systemd[1]: Started sshd@18-157.90.248.142:22-27.111.32.174:52408.service - OpenSSH per-connection server daemon (27.111.32.174:52408). Feb 13 16:00:25.874981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b-rootfs.mount: Deactivated successfully. 
Feb 13 16:00:26.420393 containerd[1597]: time="2025-02-13T16:00:26.420322786Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 16:00:26.456878 containerd[1597]: time="2025-02-13T16:00:26.456826438Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\""
Feb 13 16:00:26.457854 containerd[1597]: time="2025-02-13T16:00:26.457806771Z" level=info msg="StartContainer for \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\""
Feb 13 16:00:26.535386 containerd[1597]: time="2025-02-13T16:00:26.535255815Z" level=info msg="StartContainer for \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\" returns successfully"
Feb 13 16:00:26.566307 containerd[1597]: time="2025-02-13T16:00:26.565955068Z" level=info msg="shim disconnected" id=85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525 namespace=k8s.io
Feb 13 16:00:26.566307 containerd[1597]: time="2025-02-13T16:00:26.566087990Z" level=warning msg="cleaning up after shim disconnected" id=85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525 namespace=k8s.io
Feb 13 16:00:26.566307 containerd[1597]: time="2025-02-13T16:00:26.566096670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:00:26.579147 containerd[1597]: time="2025-02-13T16:00:26.579013204Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:00:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 16:00:26.604059 sshd[3571]: Invalid user rstudio from 27.111.32.174 port 52408
Feb 13 16:00:26.783546 sshd[3571]: Received disconnect from 27.111.32.174 port 52408:11: Bye Bye [preauth]
Feb 13 16:00:26.783546 sshd[3571]: Disconnected from invalid user rstudio 27.111.32.174 port 52408 [preauth]
Feb 13 16:00:26.788923 systemd[1]: sshd@18-157.90.248.142:22-27.111.32.174:52408.service: Deactivated successfully.
Feb 13 16:00:26.876106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525-rootfs.mount: Deactivated successfully.
Feb 13 16:00:27.425545 containerd[1597]: time="2025-02-13T16:00:27.423682144Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 16:00:27.446854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208970651.mount: Deactivated successfully.
Feb 13 16:00:27.452191 containerd[1597]: time="2025-02-13T16:00:27.451989726Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\""
Feb 13 16:00:27.452957 containerd[1597]: time="2025-02-13T16:00:27.452915898Z" level=info msg="StartContainer for \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\""
Feb 13 16:00:27.507935 containerd[1597]: time="2025-02-13T16:00:27.507724477Z" level=info msg="StartContainer for \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\" returns successfully"
Feb 13 16:00:27.538693 containerd[1597]: time="2025-02-13T16:00:27.538599373Z" level=info msg="shim disconnected" id=6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a namespace=k8s.io
Feb 13 16:00:27.538693 containerd[1597]: time="2025-02-13T16:00:27.538656854Z" level=warning msg="cleaning up after shim disconnected" id=6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a namespace=k8s.io
Feb 13 16:00:27.538693 containerd[1597]: time="2025-02-13T16:00:27.538665614Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:00:27.549384 containerd[1597]: time="2025-02-13T16:00:27.549335277Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:00:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 16:00:27.874244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a-rootfs.mount: Deactivated successfully.
Feb 13 16:00:28.428230 containerd[1597]: time="2025-02-13T16:00:28.427270112Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 16:00:28.443368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738571325.mount: Deactivated successfully.
Feb 13 16:00:28.450463 containerd[1597]: time="2025-02-13T16:00:28.450411704Z" level=info msg="CreateContainer within sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\""
Feb 13 16:00:28.452137 containerd[1597]: time="2025-02-13T16:00:28.452035645Z" level=info msg="StartContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\""
Feb 13 16:00:28.512633 containerd[1597]: time="2025-02-13T16:00:28.512586582Z" level=info msg="StartContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" returns successfully"
Feb 13 16:00:28.575387 kubelet[2960]: I0213 16:00:28.575356 2960 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 16:00:28.622605 kubelet[2960]: I0213 16:00:28.620671 2960 topology_manager.go:215] "Topology Admit Handler" podUID="b47b9a93-30db-4ba0-af0a-57a749e8320d" podNamespace="kube-system" podName="coredns-76f75df574-kpxk2"
Feb 13 16:00:28.625422 kubelet[2960]: I0213 16:00:28.624442 2960 topology_manager.go:215] "Topology Admit Handler" podUID="0ba66ad7-b734-44af-895e-d742f2a6bffd" podNamespace="kube-system" podName="coredns-76f75df574-qmktv"
Feb 13 16:00:28.724449 kubelet[2960]: I0213 16:00:28.724300 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b47b9a93-30db-4ba0-af0a-57a749e8320d-config-volume\") pod \"coredns-76f75df574-kpxk2\" (UID: \"b47b9a93-30db-4ba0-af0a-57a749e8320d\") " pod="kube-system/coredns-76f75df574-kpxk2"
Feb 13 16:00:28.724449 kubelet[2960]: I0213 16:00:28.724357 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqmv\" (UniqueName: \"kubernetes.io/projected/b47b9a93-30db-4ba0-af0a-57a749e8320d-kube-api-access-5bqmv\") pod \"coredns-76f75df574-kpxk2\" (UID: \"b47b9a93-30db-4ba0-af0a-57a749e8320d\") " pod="kube-system/coredns-76f75df574-kpxk2"
Feb 13 16:00:28.826218 kubelet[2960]: I0213 16:00:28.825114 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ba66ad7-b734-44af-895e-d742f2a6bffd-config-volume\") pod \"coredns-76f75df574-qmktv\" (UID: \"0ba66ad7-b734-44af-895e-d742f2a6bffd\") " pod="kube-system/coredns-76f75df574-qmktv"
Feb 13 16:00:28.826218 kubelet[2960]: I0213 16:00:28.825659 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nh69\" (UniqueName: \"kubernetes.io/projected/0ba66ad7-b734-44af-895e-d742f2a6bffd-kube-api-access-6nh69\") pod \"coredns-76f75df574-qmktv\" (UID: \"0ba66ad7-b734-44af-895e-d742f2a6bffd\") " pod="kube-system/coredns-76f75df574-qmktv"
Feb 13 16:00:28.936958 containerd[1597]: time="2025-02-13T16:00:28.936622299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kpxk2,Uid:b47b9a93-30db-4ba0-af0a-57a749e8320d,Namespace:kube-system,Attempt:0,}"
Feb 13 16:00:29.240860 containerd[1597]: time="2025-02-13T16:00:29.240731521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qmktv,Uid:0ba66ad7-b734-44af-895e-d742f2a6bffd,Namespace:kube-system,Attempt:0,}"
Feb 13 16:00:30.617502 systemd-networkd[1236]: cilium_host: Link UP
Feb 13 16:00:30.617811 systemd-networkd[1236]: cilium_net: Link UP
Feb 13 16:00:30.618140 systemd-networkd[1236]: cilium_net: Gained carrier
Feb 13 16:00:30.619129 systemd-networkd[1236]: cilium_host: Gained carrier
Feb 13 16:00:30.727620 systemd-networkd[1236]: cilium_vxlan: Link UP
Feb 13 16:00:30.727627 systemd-networkd[1236]: cilium_vxlan: Gained carrier
Feb 13 16:00:30.974742 systemd-networkd[1236]: cilium_host: Gained IPv6LL
Feb 13 16:00:31.003402 kernel: NET: Registered PF_ALG protocol family
Feb 13 16:00:31.454767 systemd-networkd[1236]: cilium_net: Gained IPv6LL
Feb 13 16:00:31.696544 systemd-networkd[1236]: lxc_health: Link UP
Feb 13 16:00:31.703036 systemd-networkd[1236]: lxc_health: Gained carrier
Feb 13 16:00:32.025228 systemd-networkd[1236]: lxc830a702f21a1: Link UP
Feb 13 16:00:32.031553 kernel: eth0: renamed from tmp5489d
Feb 13 16:00:32.036870 systemd-networkd[1236]: lxc830a702f21a1: Gained carrier
Feb 13 16:00:32.286212 systemd-networkd[1236]: lxc6a1d64708b0d: Link UP
Feb 13 16:00:32.292678 kernel: eth0: renamed from tmp37297
Feb 13 16:00:32.297311 systemd-networkd[1236]: lxc6a1d64708b0d: Gained carrier
Feb 13 16:00:32.541338 systemd-networkd[1236]: cilium_vxlan: Gained IPv6LL
Feb 13 16:00:32.718529 kubelet[2960]: I0213 16:00:32.718490 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w99cn" podStartSLOduration=8.645185922 podStartE2EDuration="1m14.718451023s" podCreationTimestamp="2025-02-13 15:59:18 +0000 UTC" firstStartedPulling="2025-02-13 15:59:18.786453434 +0000 UTC m=+13.788930954" lastFinishedPulling="2025-02-13 16:00:24.859718535 +0000 UTC m=+79.862196055" observedRunningTime="2025-02-13 16:00:29.453742514 +0000 UTC m=+84.456220034" watchObservedRunningTime="2025-02-13 16:00:32.718451023 +0000 UTC m=+87.720928543"
Feb 13 16:00:33.118497 systemd-networkd[1236]: lxc_health: Gained IPv6LL
Feb 13 16:00:33.565650 systemd-networkd[1236]: lxc830a702f21a1: Gained IPv6LL
Feb 13 16:00:34.205501 systemd-networkd[1236]: lxc6a1d64708b0d: Gained IPv6LL
Feb 13 16:00:36.004874 containerd[1597]: time="2025-02-13T16:00:36.004773883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:00:36.005738 containerd[1597]: time="2025-02-13T16:00:36.005605134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:00:36.006032 containerd[1597]: time="2025-02-13T16:00:36.005911578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:00:36.007308 containerd[1597]: time="2025-02-13T16:00:36.007258996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:00:36.029236 containerd[1597]: time="2025-02-13T16:00:36.028809968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:00:36.029236 containerd[1597]: time="2025-02-13T16:00:36.028946810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:00:36.029236 containerd[1597]: time="2025-02-13T16:00:36.028975930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:00:36.029236 containerd[1597]: time="2025-02-13T16:00:36.029052971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:00:36.123948 containerd[1597]: time="2025-02-13T16:00:36.123885694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qmktv,Uid:0ba66ad7-b734-44af-895e-d742f2a6bffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"37297f924f5903db94189ba620f5ae4458a6f7b241369b599a3b91e2681081c6\""
Feb 13 16:00:36.129307 containerd[1597]: time="2025-02-13T16:00:36.129145365Z" level=info msg="CreateContainer within sandbox \"37297f924f5903db94189ba620f5ae4458a6f7b241369b599a3b91e2681081c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 16:00:36.131756 containerd[1597]: time="2025-02-13T16:00:36.131701760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kpxk2,Uid:b47b9a93-30db-4ba0-af0a-57a749e8320d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5489d324e7dd97bf4ace3640a97da872f5efe8d5c03ab57f50fbbfdebb5dddb1\""
Feb 13 16:00:36.140359 containerd[1597]: time="2025-02-13T16:00:36.140121394Z" level=info msg="CreateContainer within sandbox \"5489d324e7dd97bf4ace3640a97da872f5efe8d5c03ab57f50fbbfdebb5dddb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 16:00:36.154431 containerd[1597]: time="2025-02-13T16:00:36.154383827Z" level=info msg="CreateContainer within sandbox \"37297f924f5903db94189ba620f5ae4458a6f7b241369b599a3b91e2681081c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc907b82d7b155c97206c6c7ba6f8608c0b0625e6c555976488e022423f27317\""
Feb 13 16:00:36.156339 containerd[1597]: time="2025-02-13T16:00:36.156306413Z" level=info msg="StartContainer for \"fc907b82d7b155c97206c6c7ba6f8608c0b0625e6c555976488e022423f27317\""
Feb 13 16:00:36.161925 containerd[1597]: time="2025-02-13T16:00:36.161772327Z" level=info msg="CreateContainer within sandbox \"5489d324e7dd97bf4ace3640a97da872f5efe8d5c03ab57f50fbbfdebb5dddb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d296c4ac12da0922d7755800c5d29b1044e6773e678d6fc8b002bc64310ce8ed\""
Feb 13 16:00:36.162948 containerd[1597]: time="2025-02-13T16:00:36.162793620Z" level=info msg="StartContainer for \"d296c4ac12da0922d7755800c5d29b1044e6773e678d6fc8b002bc64310ce8ed\""
Feb 13 16:00:36.236644 containerd[1597]: time="2025-02-13T16:00:36.236534898Z" level=info msg="StartContainer for \"fc907b82d7b155c97206c6c7ba6f8608c0b0625e6c555976488e022423f27317\" returns successfully"
Feb 13 16:00:36.246371 containerd[1597]: time="2025-02-13T16:00:36.245537100Z" level=info msg="StartContainer for \"d296c4ac12da0922d7755800c5d29b1044e6773e678d6fc8b002bc64310ce8ed\" returns successfully"
Feb 13 16:00:36.488933 kubelet[2960]: I0213 16:00:36.488504 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kpxk2" podStartSLOduration=78.488461186 podStartE2EDuration="1m18.488461186s" podCreationTimestamp="2025-02-13 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:00:36.470230459 +0000 UTC m=+91.472707979" watchObservedRunningTime="2025-02-13 16:00:36.488461186 +0000 UTC m=+91.490938706"
Feb 13 16:00:36.505454 kubelet[2960]: I0213 16:00:36.505410 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qmktv" podStartSLOduration=78.505368815 podStartE2EDuration="1m18.505368815s" podCreationTimestamp="2025-02-13 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:00:36.504933729 +0000 UTC m=+91.507411289" watchObservedRunningTime="2025-02-13 16:00:36.505368815 +0000 UTC m=+91.507846295"
Feb 13 16:00:50.475575 systemd[1]: Started sshd@19-157.90.248.142:22-125.94.71.207:49306.service - OpenSSH per-connection server daemon (125.94.71.207:49306).
Feb 13 16:00:51.847655 sshd[4382]: Invalid user pruebas from 125.94.71.207 port 49306
Feb 13 16:00:52.761421 sshd[4382]: Received disconnect from 125.94.71.207 port 49306:11: Bye Bye [preauth]
Feb 13 16:00:52.761421 sshd[4382]: Disconnected from invalid user pruebas 125.94.71.207 port 49306 [preauth]
Feb 13 16:00:52.763999 systemd[1]: sshd@19-157.90.248.142:22-125.94.71.207:49306.service: Deactivated successfully.
Feb 13 16:00:55.512486 systemd[1]: Started sshd@20-157.90.248.142:22-186.124.22.55:47922.service - OpenSSH per-connection server daemon (186.124.22.55:47922).
Feb 13 16:00:56.881138 sshd[4387]: Invalid user gitlab-psql from 186.124.22.55 port 47922
Feb 13 16:00:57.140069 sshd[4387]: Received disconnect from 186.124.22.55 port 47922:11: Bye Bye [preauth]
Feb 13 16:00:57.140069 sshd[4387]: Disconnected from invalid user gitlab-psql 186.124.22.55 port 47922 [preauth]
Feb 13 16:00:57.144455 systemd[1]: sshd@20-157.90.248.142:22-186.124.22.55:47922.service: Deactivated successfully.
Feb 13 16:01:25.507437 systemd[1]: Started sshd@21-157.90.248.142:22-119.159.234.131:45541.service - OpenSSH per-connection server daemon (119.159.234.131:45541).
Feb 13 16:01:26.375976 sshd[4396]: Invalid user webserver from 119.159.234.131 port 45541
Feb 13 16:01:26.533907 sshd[4396]: Received disconnect from 119.159.234.131 port 45541:11: Bye Bye [preauth]
Feb 13 16:01:26.533907 sshd[4396]: Disconnected from invalid user webserver 119.159.234.131 port 45541 [preauth]
Feb 13 16:01:26.535510 systemd[1]: sshd@21-157.90.248.142:22-119.159.234.131:45541.service: Deactivated successfully.
Feb 13 16:01:42.201430 systemd[1]: Started sshd@22-157.90.248.142:22-118.193.38.84:46296.service - OpenSSH per-connection server daemon (118.193.38.84:46296).
Feb 13 16:01:43.634482 sshd[4401]: Invalid user upload from 118.193.38.84 port 46296
Feb 13 16:01:43.905038 sshd[4401]: Received disconnect from 118.193.38.84 port 46296:11: Bye Bye [preauth]
Feb 13 16:01:43.905038 sshd[4401]: Disconnected from invalid user upload 118.193.38.84 port 46296 [preauth]
Feb 13 16:01:43.907250 systemd[1]: sshd@22-157.90.248.142:22-118.193.38.84:46296.service: Deactivated successfully.
Feb 13 16:01:44.751413 systemd[1]: Started sshd@23-157.90.248.142:22-27.111.32.174:48130.service - OpenSSH per-connection server daemon (27.111.32.174:48130).
Feb 13 16:01:45.745252 sshd[4406]: Invalid user manager from 27.111.32.174 port 48130
Feb 13 16:01:45.925454 sshd[4406]: Received disconnect from 27.111.32.174 port 48130:11: Bye Bye [preauth]
Feb 13 16:01:45.925454 sshd[4406]: Disconnected from invalid user manager 27.111.32.174 port 48130 [preauth]
Feb 13 16:01:45.927807 systemd[1]: sshd@23-157.90.248.142:22-27.111.32.174:48130.service: Deactivated successfully.
Feb 13 16:02:22.059478 systemd[1]: Started sshd@24-157.90.248.142:22-125.94.71.207:46198.service - OpenSSH per-connection server daemon (125.94.71.207:46198).
Feb 13 16:02:23.427636 sshd[4419]: Invalid user autcom from 125.94.71.207 port 46198
Feb 13 16:02:23.686272 sshd[4419]: Received disconnect from 125.94.71.207 port 46198:11: Bye Bye [preauth]
Feb 13 16:02:23.686272 sshd[4419]: Disconnected from invalid user autcom 125.94.71.207 port 46198 [preauth]
Feb 13 16:02:23.689012 systemd[1]: sshd@24-157.90.248.142:22-125.94.71.207:46198.service: Deactivated successfully.
Feb 13 16:02:29.138561 systemd[1]: Started sshd@25-157.90.248.142:22-186.124.22.55:35162.service - OpenSSH per-connection server daemon (186.124.22.55:35162).
Feb 13 16:02:30.492613 sshd[4428]: Invalid user erpnext from 186.124.22.55 port 35162
Feb 13 16:02:30.746338 sshd[4428]: Received disconnect from 186.124.22.55 port 35162:11: Bye Bye [preauth]
Feb 13 16:02:30.746338 sshd[4428]: Disconnected from invalid user erpnext 186.124.22.55 port 35162 [preauth]
Feb 13 16:02:30.750105 systemd[1]: sshd@25-157.90.248.142:22-186.124.22.55:35162.service: Deactivated successfully.
Feb 13 16:02:48.270629 systemd[1]: Started sshd@26-157.90.248.142:22-119.159.234.131:58377.service - OpenSSH per-connection server daemon (119.159.234.131:58377).
Feb 13 16:02:49.164660 sshd[4436]: Invalid user gitlab-runner from 119.159.234.131 port 58377
Feb 13 16:02:49.350136 sshd[4436]: Received disconnect from 119.159.234.131 port 58377:11: Bye Bye [preauth]
Feb 13 16:02:49.350136 sshd[4436]: Disconnected from invalid user gitlab-runner 119.159.234.131 port 58377 [preauth]
Feb 13 16:02:49.352982 systemd[1]: sshd@26-157.90.248.142:22-119.159.234.131:58377.service: Deactivated successfully.
Feb 13 16:03:00.500103 systemd[1]: Started sshd@27-157.90.248.142:22-27.111.32.174:41248.service - OpenSSH per-connection server daemon (27.111.32.174:41248).
Feb 13 16:03:01.454708 sshd[4444]: Invalid user webserver from 27.111.32.174 port 41248
Feb 13 16:03:01.634005 sshd[4444]: Received disconnect from 27.111.32.174 port 41248:11: Bye Bye [preauth]
Feb 13 16:03:01.634005 sshd[4444]: Disconnected from invalid user webserver 27.111.32.174 port 41248 [preauth]
Feb 13 16:03:01.637857 systemd[1]: sshd@27-157.90.248.142:22-27.111.32.174:41248.service: Deactivated successfully.
Feb 13 16:03:18.279451 systemd[1]: Started sshd@28-157.90.248.142:22-118.193.38.84:55878.service - OpenSSH per-connection server daemon (118.193.38.84:55878).
Feb 13 16:03:19.719667 sshd[4451]: Invalid user test from 118.193.38.84 port 55878
Feb 13 16:03:19.991400 sshd[4451]: Received disconnect from 118.193.38.84 port 55878:11: Bye Bye [preauth]
Feb 13 16:03:19.991400 sshd[4451]: Disconnected from invalid user test 118.193.38.84 port 55878 [preauth]
Feb 13 16:03:19.995910 systemd[1]: sshd@28-157.90.248.142:22-118.193.38.84:55878.service: Deactivated successfully.
Feb 13 16:03:47.916735 systemd[1]: Started sshd@29-157.90.248.142:22-125.94.71.207:43082.service - OpenSSH per-connection server daemon (125.94.71.207:43082).
Feb 13 16:03:49.802652 sshd[4459]: Invalid user import from 125.94.71.207 port 43082
Feb 13 16:03:50.166039 sshd[4459]: Received disconnect from 125.94.71.207 port 43082:11: Bye Bye [preauth]
Feb 13 16:03:50.166039 sshd[4459]: Disconnected from invalid user import 125.94.71.207 port 43082 [preauth]
Feb 13 16:03:50.167463 systemd[1]: sshd@29-157.90.248.142:22-125.94.71.207:43082.service: Deactivated successfully.
Feb 13 16:03:54.169831 systemd[1]: Started sshd@30-157.90.248.142:22-139.178.89.65:49016.service - OpenSSH per-connection server daemon (139.178.89.65:49016).
Feb 13 16:03:55.173955 sshd[4466]: Accepted publickey for core from 139.178.89.65 port 49016 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:03:55.176375 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:03:55.182547 systemd-logind[1567]: New session 8 of user core.
Feb 13 16:03:55.187682 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 16:03:55.960185 sshd[4469]: Connection closed by 139.178.89.65 port 49016
Feb 13 16:03:55.962405 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
Feb 13 16:03:55.969695 systemd[1]: sshd@30-157.90.248.142:22-139.178.89.65:49016.service: Deactivated successfully.
Feb 13 16:03:55.975310 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 16:03:55.978721 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit.
Feb 13 16:03:55.979929 systemd-logind[1567]: Removed session 8.
Feb 13 16:03:59.992566 systemd[1]: Started sshd@31-157.90.248.142:22-186.124.22.55:40458.service - OpenSSH per-connection server daemon (186.124.22.55:40458).
Feb 13 16:04:01.127397 systemd[1]: Started sshd@32-157.90.248.142:22-139.178.89.65:40976.service - OpenSSH per-connection server daemon (139.178.89.65:40976).
Feb 13 16:04:01.323551 sshd[4481]: Invalid user rune from 186.124.22.55 port 40458
Feb 13 16:04:01.570914 sshd[4481]: Received disconnect from 186.124.22.55 port 40458:11: Bye Bye [preauth]
Feb 13 16:04:01.570914 sshd[4481]: Disconnected from invalid user rune 186.124.22.55 port 40458 [preauth]
Feb 13 16:04:01.572496 systemd[1]: sshd@31-157.90.248.142:22-186.124.22.55:40458.service: Deactivated successfully.
Feb 13 16:04:02.124934 sshd[4483]: Accepted publickey for core from 139.178.89.65 port 40976 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:02.127012 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:02.132463 systemd-logind[1567]: New session 9 of user core.
Feb 13 16:04:02.138777 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 16:04:02.890233 sshd[4489]: Connection closed by 139.178.89.65 port 40976
Feb 13 16:04:02.891406 sshd-session[4483]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:02.899304 systemd[1]: sshd@32-157.90.248.142:22-139.178.89.65:40976.service: Deactivated successfully.
Feb 13 16:04:02.905947 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 16:04:02.907959 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit.
Feb 13 16:04:02.909720 systemd-logind[1567]: Removed session 9.
Feb 13 16:04:08.061693 systemd[1]: Started sshd@33-157.90.248.142:22-139.178.89.65:54436.service - OpenSSH per-connection server daemon (139.178.89.65:54436).
Feb 13 16:04:09.051729 sshd[4502]: Accepted publickey for core from 139.178.89.65 port 54436 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:09.053602 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:09.059181 systemd-logind[1567]: New session 10 of user core.
Feb 13 16:04:09.069706 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 16:04:09.803746 sshd[4505]: Connection closed by 139.178.89.65 port 54436
Feb 13 16:04:09.804815 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:09.809888 systemd[1]: sshd@33-157.90.248.142:22-139.178.89.65:54436.service: Deactivated successfully.
Feb 13 16:04:09.814970 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 16:04:09.817023 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit.
Feb 13 16:04:09.818778 systemd-logind[1567]: Removed session 10.
Feb 13 16:04:09.967682 systemd[1]: Started sshd@34-157.90.248.142:22-139.178.89.65:54446.service - OpenSSH per-connection server daemon (139.178.89.65:54446).
Feb 13 16:04:10.952735 sshd[4516]: Accepted publickey for core from 139.178.89.65 port 54446 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:10.955126 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:10.962191 systemd-logind[1567]: New session 11 of user core.
Feb 13 16:04:10.971732 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 16:04:11.754844 sshd[4519]: Connection closed by 139.178.89.65 port 54446
Feb 13 16:04:11.755588 sshd-session[4516]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:11.761036 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit.
Feb 13 16:04:11.761141 systemd[1]: sshd@34-157.90.248.142:22-139.178.89.65:54446.service: Deactivated successfully.
Feb 13 16:04:11.766640 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 16:04:11.769035 systemd-logind[1567]: Removed session 11.
Feb 13 16:04:11.865507 systemd[1]: Started sshd@35-157.90.248.142:22-119.159.234.131:14706.service - OpenSSH per-connection server daemon (119.159.234.131:14706).
Feb 13 16:04:11.928570 systemd[1]: Started sshd@36-157.90.248.142:22-139.178.89.65:54456.service - OpenSSH per-connection server daemon (139.178.89.65:54456).
Feb 13 16:04:12.745487 sshd[4527]: Invalid user angel from 119.159.234.131 port 14706
Feb 13 16:04:12.906871 sshd[4527]: Received disconnect from 119.159.234.131 port 14706:11: Bye Bye [preauth]
Feb 13 16:04:12.906871 sshd[4527]: Disconnected from invalid user angel 119.159.234.131 port 14706 [preauth]
Feb 13 16:04:12.910213 systemd[1]: sshd@35-157.90.248.142:22-119.159.234.131:14706.service: Deactivated successfully.
Feb 13 16:04:12.921889 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 54456 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:12.923470 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:12.930330 systemd-logind[1567]: New session 12 of user core.
Feb 13 16:04:12.936468 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 16:04:13.681965 sshd[4535]: Connection closed by 139.178.89.65 port 54456
Feb 13 16:04:13.683249 sshd-session[4529]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:13.686862 systemd[1]: sshd@36-157.90.248.142:22-139.178.89.65:54456.service: Deactivated successfully.
Feb 13 16:04:13.694112 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit.
Feb 13 16:04:13.695877 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 16:04:13.698140 systemd-logind[1567]: Removed session 12.
Feb 13 16:04:15.805493 systemd[1]: Started sshd@37-157.90.248.142:22-27.111.32.174:50068.service - OpenSSH per-connection server daemon (27.111.32.174:50068).
Feb 13 16:04:16.847971 sshd[4547]: Invalid user ark from 27.111.32.174 port 50068
Feb 13 16:04:17.040119 sshd[4547]: Received disconnect from 27.111.32.174 port 50068:11: Bye Bye [preauth]
Feb 13 16:04:17.040119 sshd[4547]: Disconnected from invalid user ark 27.111.32.174 port 50068 [preauth]
Feb 13 16:04:17.043867 systemd[1]: sshd@37-157.90.248.142:22-27.111.32.174:50068.service: Deactivated successfully.
Feb 13 16:04:18.849617 systemd[1]: Started sshd@38-157.90.248.142:22-139.178.89.65:51628.service - OpenSSH per-connection server daemon (139.178.89.65:51628).
Feb 13 16:04:19.849262 sshd[4552]: Accepted publickey for core from 139.178.89.65 port 51628 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:19.851419 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:19.857888 systemd-logind[1567]: New session 13 of user core.
Feb 13 16:04:19.861562 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 16:04:20.607920 sshd[4557]: Connection closed by 139.178.89.65 port 51628
Feb 13 16:04:20.608867 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:20.613133 systemd[1]: sshd@38-157.90.248.142:22-139.178.89.65:51628.service: Deactivated successfully.
Feb 13 16:04:20.618044 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit.
Feb 13 16:04:20.619162 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 16:04:20.620440 systemd-logind[1567]: Removed session 13.
Feb 13 16:04:20.777554 systemd[1]: Started sshd@39-157.90.248.142:22-139.178.89.65:51632.service - OpenSSH per-connection server daemon (139.178.89.65:51632).
Feb 13 16:04:21.775851 sshd[4568]: Accepted publickey for core from 139.178.89.65 port 51632 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:21.777841 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:21.782949 systemd-logind[1567]: New session 14 of user core.
Feb 13 16:04:21.787817 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 16:04:22.590785 sshd[4571]: Connection closed by 139.178.89.65 port 51632
Feb 13 16:04:22.591833 sshd-session[4568]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:22.596105 systemd[1]: sshd@39-157.90.248.142:22-139.178.89.65:51632.service: Deactivated successfully.
Feb 13 16:04:22.602636 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 16:04:22.603110 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit.
Feb 13 16:04:22.605105 systemd-logind[1567]: Removed session 14.
Feb 13 16:04:22.754687 systemd[1]: Started sshd@40-157.90.248.142:22-139.178.89.65:51644.service - OpenSSH per-connection server daemon (139.178.89.65:51644).
Feb 13 16:04:23.734615 sshd[4580]: Accepted publickey for core from 139.178.89.65 port 51644 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:23.736669 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:23.741744 systemd-logind[1567]: New session 15 of user core.
Feb 13 16:04:23.747417 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 16:04:26.025544 sshd[4583]: Connection closed by 139.178.89.65 port 51644
Feb 13 16:04:26.025411 sshd-session[4580]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:26.030905 systemd[1]: sshd@40-157.90.248.142:22-139.178.89.65:51644.service: Deactivated successfully.
Feb 13 16:04:26.036638 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 16:04:26.036697 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit.
Feb 13 16:04:26.040626 systemd-logind[1567]: Removed session 15.
Feb 13 16:04:26.190442 systemd[1]: Started sshd@41-157.90.248.142:22-139.178.89.65:43520.service - OpenSSH per-connection server daemon (139.178.89.65:43520).
Feb 13 16:04:27.173081 sshd[4599]: Accepted publickey for core from 139.178.89.65 port 43520 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:27.175519 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:27.185091 systemd-logind[1567]: New session 16 of user core.
Feb 13 16:04:27.192402 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 16:04:28.039862 sshd[4602]: Connection closed by 139.178.89.65 port 43520
Feb 13 16:04:28.040852 sshd-session[4599]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:28.045595 systemd[1]: sshd@41-157.90.248.142:22-139.178.89.65:43520.service: Deactivated successfully.
Feb 13 16:04:28.051498 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit.
Feb 13 16:04:28.051685 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 16:04:28.052944 systemd-logind[1567]: Removed session 16.
Feb 13 16:04:28.208662 systemd[1]: Started sshd@42-157.90.248.142:22-139.178.89.65:43532.service - OpenSSH per-connection server daemon (139.178.89.65:43532).
Feb 13 16:04:29.194720 sshd[4611]: Accepted publickey for core from 139.178.89.65 port 43532 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:29.196936 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:29.204897 systemd-logind[1567]: New session 17 of user core.
Feb 13 16:04:29.210169 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 16:04:29.943315 sshd[4614]: Connection closed by 139.178.89.65 port 43532
Feb 13 16:04:29.944335 sshd-session[4611]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:29.948958 systemd[1]: sshd@42-157.90.248.142:22-139.178.89.65:43532.service: Deactivated successfully.
Feb 13 16:04:29.952210 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit.
Feb 13 16:04:29.952232 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 16:04:29.953883 systemd-logind[1567]: Removed session 17.
Feb 13 16:04:35.109719 systemd[1]: Started sshd@43-157.90.248.142:22-139.178.89.65:42352.service - OpenSSH per-connection server daemon (139.178.89.65:42352).
Feb 13 16:04:36.096725 sshd[4628]: Accepted publickey for core from 139.178.89.65 port 42352 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:36.099439 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:36.105178 systemd-logind[1567]: New session 18 of user core.
Feb 13 16:04:36.109430 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 16:04:36.848844 sshd[4631]: Connection closed by 139.178.89.65 port 42352
Feb 13 16:04:36.849836 sshd-session[4628]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:36.855540 systemd[1]: sshd@43-157.90.248.142:22-139.178.89.65:42352.service: Deactivated successfully.
Feb 13 16:04:36.856094 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit.
Feb 13 16:04:36.860663 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 16:04:36.862081 systemd-logind[1567]: Removed session 18.
Feb 13 16:04:42.020467 systemd[1]: Started sshd@44-157.90.248.142:22-139.178.89.65:42360.service - OpenSSH per-connection server daemon (139.178.89.65:42360).
Feb 13 16:04:43.006596 sshd[4642]: Accepted publickey for core from 139.178.89.65 port 42360 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:43.008700 sshd-session[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:43.014220 systemd-logind[1567]: New session 19 of user core.
Feb 13 16:04:43.020734 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 16:04:43.754981 sshd[4645]: Connection closed by 139.178.89.65 port 42360
Feb 13 16:04:43.755557 sshd-session[4642]: pam_unix(sshd:session): session closed for user core
Feb 13 16:04:43.760201 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit.
Feb 13 16:04:43.761026 systemd[1]: sshd@44-157.90.248.142:22-139.178.89.65:42360.service: Deactivated successfully.
Feb 13 16:04:43.766957 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 16:04:43.767823 systemd-logind[1567]: Removed session 19.
Feb 13 16:04:43.919515 systemd[1]: Started sshd@45-157.90.248.142:22-139.178.89.65:42372.service - OpenSSH per-connection server daemon (139.178.89.65:42372).
Feb 13 16:04:44.895777 sshd[4656]: Accepted publickey for core from 139.178.89.65 port 42372 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g
Feb 13 16:04:44.897634 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:04:44.902253 systemd-logind[1567]: New session 20 of user core.
Feb 13 16:04:44.909649 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 16:04:47.427250 containerd[1597]: time="2025-02-13T16:04:47.425664940Z" level=info msg="StopContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" with timeout 30 (s)"
Feb 13 16:04:47.429291 containerd[1597]: time="2025-02-13T16:04:47.429183947Z" level=info msg="Stop container \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" with signal terminated"
Feb 13 16:04:47.442533 containerd[1597]: time="2025-02-13T16:04:47.442421603Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 16:04:47.454366 containerd[1597]: time="2025-02-13T16:04:47.454249320Z" level=info msg="StopContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" with timeout 2 (s)"
Feb 13 16:04:47.454769 containerd[1597]: time="2025-02-13T16:04:47.454702966Z" level=info msg="Stop container \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" with signal terminated"
Feb 13 16:04:47.462080 systemd-networkd[1236]: lxc_health: Link DOWN
Feb 13 16:04:47.462088 systemd-networkd[1236]: lxc_health: Lost carrier
Feb 13 16:04:47.484053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7-rootfs.mount: Deactivated successfully.
Feb 13 16:04:47.502573 containerd[1597]: time="2025-02-13T16:04:47.502511682Z" level=info msg="shim disconnected" id=136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7 namespace=k8s.io
Feb 13 16:04:47.502573 containerd[1597]: time="2025-02-13T16:04:47.502601763Z" level=warning msg="cleaning up after shim disconnected" id=136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7 namespace=k8s.io
Feb 13 16:04:47.502573 containerd[1597]: time="2025-02-13T16:04:47.502613403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:04:47.514930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac-rootfs.mount: Deactivated successfully.
Feb 13 16:04:47.520255 containerd[1597]: time="2025-02-13T16:04:47.520046155Z" level=info msg="shim disconnected" id=da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac namespace=k8s.io
Feb 13 16:04:47.520255 containerd[1597]: time="2025-02-13T16:04:47.520105356Z" level=warning msg="cleaning up after shim disconnected" id=da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac namespace=k8s.io
Feb 13 16:04:47.520255 containerd[1597]: time="2025-02-13T16:04:47.520113276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:04:47.523267 containerd[1597]: time="2025-02-13T16:04:47.523021434Z" level=info msg="StopContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" returns successfully"
Feb 13 16:04:47.525864 containerd[1597]: time="2025-02-13T16:04:47.525708470Z" level=info msg="StopPodSandbox for \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\""
Feb 13 16:04:47.525864 containerd[1597]: time="2025-02-13T16:04:47.525754071Z" level=info msg="Container to stop \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.529042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7-shm.mount: Deactivated successfully.
Feb 13 16:04:47.543465 containerd[1597]: time="2025-02-13T16:04:47.543129342Z" level=info msg="StopContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" returns successfully"
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545250810Z" level=info msg="StopPodSandbox for \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\""
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545316011Z" level=info msg="Container to stop \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545331491Z" level=info msg="Container to stop \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545341611Z" level=info msg="Container to stop \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545350851Z" level=info msg="Container to stop \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.545372 containerd[1597]: time="2025-02-13T16:04:47.545358771Z" level=info msg="Container to stop \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:04:47.550364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80-shm.mount: Deactivated successfully.
Feb 13 16:04:47.593654 containerd[1597]: time="2025-02-13T16:04:47.593586612Z" level=info msg="shim disconnected" id=fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80 namespace=k8s.io
Feb 13 16:04:47.594888 containerd[1597]: time="2025-02-13T16:04:47.594858669Z" level=warning msg="cleaning up after shim disconnected" id=fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80 namespace=k8s.io
Feb 13 16:04:47.595233 containerd[1597]: time="2025-02-13T16:04:47.595069552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:04:47.597518 containerd[1597]: time="2025-02-13T16:04:47.597288461Z" level=info msg="shim disconnected" id=399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7 namespace=k8s.io
Feb 13 16:04:47.597518 containerd[1597]: time="2025-02-13T16:04:47.597380903Z" level=warning msg="cleaning up after shim disconnected" id=399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7 namespace=k8s.io
Feb 13 16:04:47.597518 containerd[1597]: time="2025-02-13T16:04:47.597389423Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:04:47.611658 containerd[1597]: time="2025-02-13T16:04:47.611121685Z" level=info msg="TearDown network for sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" successfully"
Feb 13 16:04:47.611658 containerd[1597]: time="2025-02-13T16:04:47.611207446Z" level=info msg="StopPodSandbox for \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" returns successfully"
Feb 13 16:04:47.611658 containerd[1597]: time="2025-02-13T16:04:47.611571291Z" level=info msg="TearDown network for sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" successfully"
Feb 13 16:04:47.611658 containerd[1597]: time="2025-02-13T16:04:47.611593891Z" level=info msg="StopPodSandbox for \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" returns successfully"
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799819 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkd4b\" (UniqueName: \"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-kube-api-access-kkd4b\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799874 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-kernel\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799908 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6498fa03-41c7-4e17-89a9-556afca84ae2-cilium-config-path\") pod \"6498fa03-41c7-4e17-89a9-556afca84ae2\" (UID: \"6498fa03-41c7-4e17-89a9-556afca84ae2\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799937 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-xtables-lock\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799963 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-net\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.799989 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-etc-cni-netd\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.800060 kubelet[2960]: I0213 16:04:47.800014 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-hostproc\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800792 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-run\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800850 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-config-path\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800879 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-bpf-maps\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800927 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cni-path\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800956 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-hubble-tls\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.800982 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-cgroup\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.801011 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23d0f52a-b204-4952-b16e-9af68c46b3de-clustermesh-secrets\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.801041 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-lib-modules\") pod \"23d0f52a-b204-4952-b16e-9af68c46b3de\" (UID: \"23d0f52a-b204-4952-b16e-9af68c46b3de\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.801070 2960 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nf4b\" (UniqueName: \"kubernetes.io/projected/6498fa03-41c7-4e17-89a9-556afca84ae2-kube-api-access-9nf4b\") pod \"6498fa03-41c7-4e17-89a9-556afca84ae2\" (UID: \"6498fa03-41c7-4e17-89a9-556afca84ae2\") "
Feb 13 16:04:47.802210 kubelet[2960]: I0213 16:04:47.801216 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802719 kubelet[2960]: I0213 16:04:47.802330 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802719 kubelet[2960]: I0213 16:04:47.802403 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802719 kubelet[2960]: I0213 16:04:47.802449 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802719 kubelet[2960]: I0213 16:04:47.802477 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-hostproc" (OuterVolumeSpecName: "hostproc") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802719 kubelet[2960]: I0213 16:04:47.802502 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.802966 kubelet[2960]: I0213 16:04:47.802939 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.803498 kubelet[2960]: I0213 16:04:47.803105 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.803615 kubelet[2960]: I0213 16:04:47.803129 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cni-path" (OuterVolumeSpecName: "cni-path") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.805700 kubelet[2960]: I0213 16:04:47.805678 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:04:47.808014 kubelet[2960]: I0213 16:04:47.807989 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:04:47.808193 kubelet[2960]: I0213 16:04:47.808171 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-kube-api-access-kkd4b" (OuterVolumeSpecName: "kube-api-access-kkd4b") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "kube-api-access-kkd4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:04:47.808593 kubelet[2960]: I0213 16:04:47.808441 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d0f52a-b204-4952-b16e-9af68c46b3de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 16:04:47.808671 kubelet[2960]: I0213 16:04:47.808566 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6498fa03-41c7-4e17-89a9-556afca84ae2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6498fa03-41c7-4e17-89a9-556afca84ae2" (UID: "6498fa03-41c7-4e17-89a9-556afca84ae2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 16:04:47.809417 kubelet[2960]: I0213 16:04:47.809382 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23d0f52a-b204-4952-b16e-9af68c46b3de" (UID: "23d0f52a-b204-4952-b16e-9af68c46b3de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 16:04:47.809934 kubelet[2960]: I0213 16:04:47.809897 2960 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6498fa03-41c7-4e17-89a9-556afca84ae2-kube-api-access-9nf4b" (OuterVolumeSpecName: "kube-api-access-9nf4b") pod "6498fa03-41c7-4e17-89a9-556afca84ae2" (UID: "6498fa03-41c7-4e17-89a9-556afca84ae2"). InnerVolumeSpecName "kube-api-access-9nf4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:04:47.902352 kubelet[2960]: I0213 16:04:47.902249 2960 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-xtables-lock\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902352 kubelet[2960]: I0213 16:04:47.902341 2960 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-net\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902371 2960 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-etc-cni-netd\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902396 2960 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-hostproc\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902421 2960 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-run\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902445 2960 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-config-path\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902468 2960 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-bpf-maps\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902492 2960 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cni-path\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902515 2960 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-hubble-tls\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902545 2960 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-cilium-cgroup\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902609 2960 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23d0f52a-b204-4952-b16e-9af68c46b3de-clustermesh-secrets\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.902636 kubelet[2960]: I0213 16:04:47.902641 2960 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9nf4b\" (UniqueName: \"kubernetes.io/projected/6498fa03-41c7-4e17-89a9-556afca84ae2-kube-api-access-9nf4b\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.903144 kubelet[2960]: I0213 16:04:47.902665 2960 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-lib-modules\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.903144 kubelet[2960]: I0213 16:04:47.902692 2960 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kkd4b\" (UniqueName: \"kubernetes.io/projected/23d0f52a-b204-4952-b16e-9af68c46b3de-kube-api-access-kkd4b\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.903144 kubelet[2960]: I0213 16:04:47.902716 2960 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23d0f52a-b204-4952-b16e-9af68c46b3de-host-proc-sys-kernel\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:47.903144 kubelet[2960]: I0213 16:04:47.902739 2960 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6498fa03-41c7-4e17-89a9-556afca84ae2-cilium-config-path\") on node \"ci-4152-2-1-f-29672fd7f0\" DevicePath \"\""
Feb 13 16:04:48.072962 kubelet[2960]: I0213 16:04:48.072830 2960 scope.go:117] "RemoveContainer" containerID="da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac"
Feb 13 16:04:48.083031 containerd[1597]: time="2025-02-13T16:04:48.081759580Z" level=info msg="RemoveContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\""
Feb 13 16:04:48.089900 containerd[1597]: time="2025-02-13T16:04:48.087187172Z" level=info msg="RemoveContainer for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" returns successfully"
Feb 13 16:04:48.090026 kubelet[2960]: I0213 16:04:48.087459 2960 scope.go:117] "RemoveContainer" containerID="6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a"
Feb 13 16:04:48.091744 containerd[1597]: time="2025-02-13T16:04:48.091388228Z" level=info msg="RemoveContainer for \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\""
Feb 13 16:04:48.097554 containerd[1597]: time="2025-02-13T16:04:48.096713459Z" level=info msg="RemoveContainer for \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\" returns successfully"
Feb 13 16:04:48.097776 kubelet[2960]: I0213 16:04:48.096918 2960 scope.go:117] "RemoveContainer" containerID="85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525"
Feb 13 16:04:48.100494 containerd[1597]: time="2025-02-13T16:04:48.100176105Z" level=info msg="RemoveContainer for \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\""
Feb 13 16:04:48.103496 containerd[1597]: time="2025-02-13T16:04:48.103472428Z" level=info msg="RemoveContainer for \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\" returns successfully"
Feb 13 16:04:48.103888 kubelet[2960]: I0213 16:04:48.103805 2960 scope.go:117] "RemoveContainer" containerID="ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628"
Feb 13 16:04:48.107711 containerd[1597]:
time="2025-02-13T16:04:48.107564323Z" level=info msg="RemoveContainer for \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\"" Feb 13 16:04:48.113713 containerd[1597]: time="2025-02-13T16:04:48.113655644Z" level=info msg="RemoveContainer for \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\" returns successfully" Feb 13 16:04:48.114079 kubelet[2960]: I0213 16:04:48.113982 2960 scope.go:117] "RemoveContainer" containerID="9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b" Feb 13 16:04:48.115073 containerd[1597]: time="2025-02-13T16:04:48.115029622Z" level=info msg="RemoveContainer for \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\"" Feb 13 16:04:48.120333 containerd[1597]: time="2025-02-13T16:04:48.120219051Z" level=info msg="RemoveContainer for \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\" returns successfully" Feb 13 16:04:48.120756 kubelet[2960]: I0213 16:04:48.120622 2960 scope.go:117] "RemoveContainer" containerID="da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac" Feb 13 16:04:48.121099 containerd[1597]: time="2025-02-13T16:04:48.120936821Z" level=error msg="ContainerStatus for \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\": not found" Feb 13 16:04:48.121248 kubelet[2960]: E0213 16:04:48.121080 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\": not found" containerID="da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac" Feb 13 16:04:48.121471 kubelet[2960]: I0213 16:04:48.121406 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac"} err="failed to get container status \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\": rpc error: code = NotFound desc = an error occurred when try to find container \"da430d7ade601667ba4f965a61a830d127fab53745a878a52039fbe4dbe76fac\": not found" Feb 13 16:04:48.121471 kubelet[2960]: I0213 16:04:48.121424 2960 scope.go:117] "RemoveContainer" containerID="6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a" Feb 13 16:04:48.121969 containerd[1597]: time="2025-02-13T16:04:48.121776992Z" level=error msg="ContainerStatus for \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\": not found" Feb 13 16:04:48.122027 kubelet[2960]: E0213 16:04:48.121884 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\": not found" containerID="6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a" Feb 13 16:04:48.122027 kubelet[2960]: I0213 16:04:48.121916 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a"} err="failed to get container status 
\"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6db30ad86cf25c442939c54e779a12a51ae29440b127bd983e249904484a502a\": not found" Feb 13 16:04:48.122027 kubelet[2960]: I0213 16:04:48.121927 2960 scope.go:117] "RemoveContainer" containerID="85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525" Feb 13 16:04:48.122436 containerd[1597]: time="2025-02-13T16:04:48.122252758Z" level=error msg="ContainerStatus for \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\": not found" Feb 13 16:04:48.122598 kubelet[2960]: E0213 16:04:48.122512 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\": not found" containerID="85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525" Feb 13 16:04:48.122598 kubelet[2960]: I0213 16:04:48.122539 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525"} err="failed to get container status \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\": rpc error: code = NotFound desc = an error occurred when try to find container \"85ad3676c23c36eb920a39839e63f3bf5e0eda685e9245d2dd99f901247e6525\": not found" Feb 13 16:04:48.122598 kubelet[2960]: I0213 16:04:48.122548 2960 scope.go:117] "RemoveContainer" containerID="ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628" Feb 13 16:04:48.122872 containerd[1597]: time="2025-02-13T16:04:48.122837006Z" level=error msg="ContainerStatus for \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\": not found" Feb 13 16:04:48.123115 kubelet[2960]: E0213 16:04:48.123004 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\": not found" containerID="ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628" Feb 13 16:04:48.123115 kubelet[2960]: I0213 16:04:48.123050 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628"} err="failed to get container status \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba47ad2ad514a2dfba625d4ac1b759bdab726b1071819a168ad38ada3ed03628\": not found" Feb 13 16:04:48.123115 kubelet[2960]: I0213 16:04:48.123060 2960 scope.go:117] "RemoveContainer" containerID="9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b" Feb 13 16:04:48.123629 containerd[1597]: time="2025-02-13T16:04:48.123456854Z" level=error msg="ContainerStatus for \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\": not found" Feb 13 16:04:48.123679 kubelet[2960]: E0213 16:04:48.123596 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\": not found" containerID="9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b" Feb 13 16:04:48.123780 kubelet[2960]: I0213 16:04:48.123731 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b"} err="failed to get container status \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9878e759e01ea2040e464dbbf824928d931eb2f2b87c7d717caacb061937c38b\": not found" Feb 13 16:04:48.123780 kubelet[2960]: I0213 16:04:48.123747 2960 scope.go:117] "RemoveContainer" containerID="136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7" Feb 13 16:04:48.124965 containerd[1597]: time="2025-02-13T16:04:48.124945834Z" level=info msg="RemoveContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\"" Feb 13 16:04:48.127912 containerd[1597]: time="2025-02-13T16:04:48.127882873Z" level=info msg="RemoveContainer for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" returns successfully" Feb 13 16:04:48.128213 kubelet[2960]: I0213 16:04:48.128117 2960 scope.go:117] "RemoveContainer" containerID="136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7" Feb 13 16:04:48.128506 containerd[1597]: time="2025-02-13T16:04:48.128473921Z" level=error msg="ContainerStatus for \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\": not found" Feb 13 16:04:48.128657 kubelet[2960]: E0213 16:04:48.128619 2960 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\": not found" containerID="136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7" Feb 13 16:04:48.128657 kubelet[2960]: I0213 16:04:48.128647 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7"} err="failed to get container status \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"136ce744cf5d2f3f78d74b7680abd7e14ac5838a248c2881749302ed7e8628a7\": not found" Feb 13 16:04:48.423676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7-rootfs.mount: Deactivated successfully. Feb 13 16:04:48.424000 systemd[1]: var-lib-kubelet-pods-6498fa03\x2d41c7\x2d4e17\x2d89a9\x2d556afca84ae2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nf4b.mount: Deactivated successfully. Feb 13 16:04:48.424190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80-rootfs.mount: Deactivated successfully. 
Feb 13 16:04:48.424375 systemd[1]: var-lib-kubelet-pods-23d0f52a\x2db204\x2d4952\x2db16e\x2d9af68c46b3de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkd4b.mount: Deactivated successfully. Feb 13 16:04:48.424594 systemd[1]: var-lib-kubelet-pods-23d0f52a\x2db204\x2d4952\x2db16e\x2d9af68c46b3de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 16:04:48.424799 systemd[1]: var-lib-kubelet-pods-23d0f52a\x2db204\x2d4952\x2db16e\x2d9af68c46b3de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 16:04:49.155966 kubelet[2960]: I0213 16:04:49.155917 2960 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" path="/var/lib/kubelet/pods/23d0f52a-b204-4952-b16e-9af68c46b3de/volumes" Feb 13 16:04:49.157078 kubelet[2960]: I0213 16:04:49.156659 2960 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" path="/var/lib/kubelet/pods/6498fa03-41c7-4e17-89a9-556afca84ae2/volumes" Feb 13 16:04:49.510456 sshd[4659]: Connection closed by 139.178.89.65 port 42372 Feb 13 16:04:49.511779 sshd-session[4656]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:49.515808 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:04:49.516125 systemd[1]: sshd@45-157.90.248.142:22-139.178.89.65:42372.service: Deactivated successfully. Feb 13 16:04:49.520459 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:04:49.521720 systemd-logind[1567]: Removed session 20. Feb 13 16:04:49.676547 systemd[1]: Started sshd@46-157.90.248.142:22-139.178.89.65:52338.service - OpenSSH per-connection server daemon (139.178.89.65:52338). Feb 13 16:04:50.384092 kubelet[2960]: E0213 16:04:50.383984 2960 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:04:50.662717 sshd[4824]: Accepted publickey for core from 139.178.89.65 port 52338 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 16:04:50.664558 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:50.669425 systemd-logind[1567]: New session 21 of user core. Feb 13 16:04:50.677732 systemd[1]: Started session-21.scope - Session 21 of User core. 
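
The "Cleaned up orphaned pod volumes dir" entries above come from the kubelet's housekeeping pass over /var/lib/kubelet/pods: once a pod's UID is no longer active and all of its volumes have been unmounted, the leftover volumes directory is removed. A greatly simplified sketch of that sweep; the real kubelet performs additional mount-safety checks before deleting anything:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// cleanupOrphans removes pods/<uid>/volumes directories whose UID is no
// longer active. Simplified: only empty volumes directories are removed,
// standing in for the kubelet's "all volumes unmounted" check.
func cleanupOrphans(podsDir string, active map[string]bool) {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		volDir := filepath.Join(podsDir, e.Name(), "volumes")
		if ents, err := os.ReadDir(volDir); err == nil && len(ents) == 0 {
			if err := os.Remove(volDir); err == nil {
				log.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q", e.Name(), volDir)
			}
		}
	}
}

func main() {
	cleanupOrphans("/var/lib/kubelet/pods", map[string]bool{ /* active pod UIDs */ })
}
```
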
Feb 13 16:04:51.793089 kubelet[2960]: I0213 16:04:51.791164 2960 topology_manager.go:215] "Topology Admit Handler" podUID="f612df5d-2081-49e2-9bed-2d1994f41339" podNamespace="kube-system" podName="cilium-kcxbb" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791225 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" containerName="cilium-operator" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791236 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="apply-sysctl-overwrites" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791243 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="mount-bpf-fs" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791251 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="cilium-agent" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791267 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="mount-cgroup" Feb 13 16:04:51.793089 kubelet[2960]: E0213 16:04:51.791277 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="clean-cilium-state" Feb 13 16:04:51.793089 kubelet[2960]: I0213 16:04:51.791300 2960 memory_manager.go:354] "RemoveStaleState removing state" podUID="6498fa03-41c7-4e17-89a9-556afca84ae2" containerName="cilium-operator" Feb 13 16:04:51.793089 kubelet[2960]: I0213 16:04:51.791344 2960 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d0f52a-b204-4952-b16e-9af68c46b3de" containerName="cilium-agent" Feb 13 16:04:51.924084 kubelet[2960]: I0213 16:04:51.923998 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-lib-modules\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.924084 kubelet[2960]: I0213 16:04:51.924076 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-bpf-maps\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.924360 kubelet[2960]: I0213 16:04:51.924120 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-cilium-cgroup\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.924360 kubelet[2960]: I0213 16:04:51.924187 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-xtables-lock\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.924360 kubelet[2960]: I0213 16:04:51.924235 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f612df5d-2081-49e2-9bed-2d1994f41339-cilium-config-path\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.925270 kubelet[2960]: I0213 16:04:51.924503 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-host-proc-sys-net\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.925270 kubelet[2960]: I0213 16:04:51.924571 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-host-proc-sys-kernel\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.925270 kubelet[2960]: I0213 16:04:51.924631 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkppt\" (UniqueName: \"kubernetes.io/projected/f612df5d-2081-49e2-9bed-2d1994f41339-kube-api-access-xkppt\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.925270 kubelet[2960]: I0213 16:04:51.924923 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-etc-cni-netd\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.925577 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f612df5d-2081-49e2-9bed-2d1994f41339-clustermesh-secrets\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.925761 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f612df5d-2081-49e2-9bed-2d1994f41339-cilium-ipsec-secrets\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.925826 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-cilium-run\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.926016 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-hostproc\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.926061 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f612df5d-2081-49e2-9bed-2d1994f41339-cni-path\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " 
pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.926402 kubelet[2960]: I0213 16:04:51.926133 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f612df5d-2081-49e2-9bed-2d1994f41339-hubble-tls\") pod \"cilium-kcxbb\" (UID: \"f612df5d-2081-49e2-9bed-2d1994f41339\") " pod="kube-system/cilium-kcxbb" Feb 13 16:04:51.974416 sshd[4827]: Connection closed by 139.178.89.65 port 52338 Feb 13 16:04:51.975285 sshd-session[4824]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:51.981302 systemd[1]: sshd@46-157.90.248.142:22-139.178.89.65:52338.service: Deactivated successfully. Feb 13 16:04:51.985461 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:04:51.985625 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:04:51.987592 systemd-logind[1567]: Removed session 21. Feb 13 16:04:52.100046 containerd[1597]: time="2025-02-13T16:04:52.099971715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcxbb,Uid:f612df5d-2081-49e2-9bed-2d1994f41339,Namespace:kube-system,Attempt:0,}" Feb 13 16:04:52.123232 containerd[1597]: time="2025-02-13T16:04:52.122977821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:04:52.123232 containerd[1597]: time="2025-02-13T16:04:52.123032262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:04:52.123232 containerd[1597]: time="2025-02-13T16:04:52.123046622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:04:52.123232 containerd[1597]: time="2025-02-13T16:04:52.123127263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:04:52.137495 systemd[1]: Started sshd@47-157.90.248.142:22-139.178.89.65:52340.service - OpenSSH per-connection server daemon (139.178.89.65:52340). 
Feb 13 16:04:52.170946 containerd[1597]: time="2025-02-13T16:04:52.170906618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcxbb,Uid:f612df5d-2081-49e2-9bed-2d1994f41339,Namespace:kube-system,Attempt:0,} returns sandbox id \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\"" Feb 13 16:04:52.176492 containerd[1597]: time="2025-02-13T16:04:52.176392051Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:04:52.187549 containerd[1597]: time="2025-02-13T16:04:52.187414998Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c839f75222ab65c6bbdaae08754531d6b9016451959c439e58a930b740a6c686\"" Feb 13 16:04:52.188999 containerd[1597]: time="2025-02-13T16:04:52.188077086Z" level=info msg="StartContainer for \"c839f75222ab65c6bbdaae08754531d6b9016451959c439e58a930b740a6c686\"" Feb 13 16:04:52.247960 containerd[1597]: time="2025-02-13T16:04:52.247904042Z" level=info msg="StartContainer for \"c839f75222ab65c6bbdaae08754531d6b9016451959c439e58a930b740a6c686\" returns successfully" Feb 13 16:04:52.269531 kubelet[2960]: I0213 16:04:52.269347 2960 setters.go:568] "Node became not ready" node="ci-4152-2-1-f-29672fd7f0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T16:04:52Z","lastTransitionTime":"2025-02-13T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 16:04:52.294589 containerd[1597]: time="2025-02-13T16:04:52.294523342Z" level=info msg="shim disconnected" id=c839f75222ab65c6bbdaae08754531d6b9016451959c439e58a930b740a6c686 namespace=k8s.io Feb 13 16:04:52.294589 containerd[1597]: time="2025-02-13T16:04:52.294581422Z" level=warning msg="cleaning up after shim disconnected" id=c839f75222ab65c6bbdaae08754531d6b9016451959c439e58a930b740a6c686 namespace=k8s.io Feb 13 16:04:52.294589 containerd[1597]: time="2025-02-13T16:04:52.294589703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:04:52.305885 containerd[1597]: time="2025-02-13T16:04:52.305824012Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:04:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:04:53.104698 containerd[1597]: time="2025-02-13T16:04:53.104655274Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:04:53.127789 sshd[4868]: Accepted publickey for core from 139.178.89.65 port 52340 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 16:04:53.129042 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:53.130693 containerd[1597]: time="2025-02-13T16:04:53.130562898Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba\"" Feb 13 16:04:53.133010 
containerd[1597]: time="2025-02-13T16:04:53.132186360Z" level=info msg="StartContainer for \"3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba\"" Feb 13 16:04:53.142098 systemd-logind[1567]: New session 22 of user core. Feb 13 16:04:53.147529 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:04:53.200879 containerd[1597]: time="2025-02-13T16:04:53.200828673Z" level=info msg="StartContainer for \"3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba\" returns successfully" Feb 13 16:04:53.229587 containerd[1597]: time="2025-02-13T16:04:53.229519374Z" level=info msg="shim disconnected" id=3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba namespace=k8s.io Feb 13 16:04:53.229587 containerd[1597]: time="2025-02-13T16:04:53.229581175Z" level=warning msg="cleaning up after shim disconnected" id=3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba namespace=k8s.io Feb 13 16:04:53.229587 containerd[1597]: time="2025-02-13T16:04:53.229589335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:04:53.799132 sshd[4960]: Connection closed by 139.178.89.65 port 52340 Feb 13 16:04:53.799985 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:53.804949 systemd[1]: sshd@47-157.90.248.142:22-139.178.89.65:52340.service: Deactivated successfully. Feb 13 16:04:53.804996 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:04:53.810837 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:04:53.812578 systemd-logind[1567]: Removed session 22. Feb 13 16:04:53.966601 systemd[1]: Started sshd@48-157.90.248.142:22-139.178.89.65:52350.service - OpenSSH per-connection server daemon (139.178.89.65:52350). Feb 13 16:04:54.034944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ad6e55d992c891654b73d704222f0a4838be6b0bb2513d39afa14ca731d57ba-rootfs.mount: Deactivated successfully. Feb 13 16:04:54.045631 systemd[1]: Started sshd@49-157.90.248.142:22-118.193.38.84:47306.service - OpenSSH per-connection server daemon (118.193.38.84:47306). Feb 13 16:04:54.103013 containerd[1597]: time="2025-02-13T16:04:54.102890508Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:04:54.117597 containerd[1597]: time="2025-02-13T16:04:54.117549583Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238\"" Feb 13 16:04:54.120445 containerd[1597]: time="2025-02-13T16:04:54.120371340Z" level=info msg="StartContainer for \"362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238\"" Feb 13 16:04:54.196697 containerd[1597]: time="2025-02-13T16:04:54.195463859Z" level=info msg="StartContainer for \"362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238\" returns successfully" Feb 13 16:04:54.219549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238-rootfs.mount: Deactivated successfully. 
Feb 13 16:04:54.221471 containerd[1597]: time="2025-02-13T16:04:54.221400724Z" level=info msg="shim disconnected" id=362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238 namespace=k8s.io Feb 13 16:04:54.221471 containerd[1597]: time="2025-02-13T16:04:54.221471605Z" level=warning msg="cleaning up after shim disconnected" id=362d81d25c858d7d0fad65df6c1b1fdfd313cbc6de9b526d303d7d652ab88238 namespace=k8s.io Feb 13 16:04:54.221613 containerd[1597]: time="2025-02-13T16:04:54.221480325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:04:54.954410 sshd[5015]: Accepted publickey for core from 139.178.89.65 port 52350 ssh2: RSA SHA256:KBbJQhuYoAcoXq4F5fzIreCcUwPw2deXowxxtv7qw9g Feb 13 16:04:54.956120 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:54.967504 systemd-logind[1567]: New session 23 of user core. Feb 13 16:04:54.976545 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:04:55.112597 containerd[1597]: time="2025-02-13T16:04:55.112447654Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:04:55.130385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352914483.mount: Deactivated successfully. Feb 13 16:04:55.139751 containerd[1597]: time="2025-02-13T16:04:55.139688576Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b\"" Feb 13 16:04:55.142701 containerd[1597]: time="2025-02-13T16:04:55.140467947Z" level=info msg="StartContainer for \"c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b\"" Feb 13 16:04:55.216463 containerd[1597]: time="2025-02-13T16:04:55.215982351Z" level=info msg="StartContainer for \"c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b\" returns successfully" Feb 13 16:04:55.236822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b-rootfs.mount: Deactivated successfully. Feb 13 16:04:55.240064 containerd[1597]: time="2025-02-13T16:04:55.239998110Z" level=info msg="shim disconnected" id=c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b namespace=k8s.io Feb 13 16:04:55.240064 containerd[1597]: time="2025-02-13T16:04:55.240064591Z" level=warning msg="cleaning up after shim disconnected" id=c34fb79489c3982197868df5fcffc1bb33bb1df8ac1b3f2ec2fc68029dfaa03b namespace=k8s.io Feb 13 16:04:55.240272 containerd[1597]: time="2025-02-13T16:04:55.240073951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:04:55.385973 kubelet[2960]: E0213 16:04:55.385874 2960 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:04:55.478146 sshd[5017]: Invalid user mapadmin from 118.193.38.84 port 47306 Feb 13 16:04:55.748429 sshd[5017]: Received disconnect from 118.193.38.84 port 47306:11: Bye Bye [preauth] Feb 13 16:04:55.748429 sshd[5017]: Disconnected from invalid user mapadmin 118.193.38.84 port 47306 [preauth] Feb 13 16:04:55.752437 systemd[1]: sshd@49-157.90.248.142:22-118.193.38.84:47306.service: Deactivated successfully. 
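
The sshd@49 entries above record a failed login probe: an invalid user "mapadmin" from 118.193.38.84 that disconnected before authenticating. For fail2ban-style accounting, the relevant fields can be pulled out of such lines with a small parser; a sketch:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches sshd's "Invalid user NAME from ADDR port PORT" preauth message.
var invalidUser = regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)

func main() {
	line := `sshd[5017]: Invalid user mapadmin from 118.193.38.84 port 47306`
	if m := invalidUser.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s addr=%s port=%s\n", m[1], m[2], m[3])
	}
}
```
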
Feb 13 16:04:56.115180 containerd[1597]: time="2025-02-13T16:04:56.114421220Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:04:56.147382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344521577.mount: Deactivated successfully. Feb 13 16:04:56.158213 containerd[1597]: time="2025-02-13T16:04:56.158101601Z" level=info msg="CreateContainer within sandbox \"fadcb2a0102fc0924211e9143798864d1818c25a3867a7b3871751e8b8628067\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"915cac2779f6aae2990afd7f5b32dd3da1891f3aa2e1e2c167795a9aacd7510a\"" Feb 13 16:04:56.165233 containerd[1597]: time="2025-02-13T16:04:56.164499127Z" level=info msg="StartContainer for \"915cac2779f6aae2990afd7f5b32dd3da1891f3aa2e1e2c167795a9aacd7510a\"" Feb 13 16:04:56.232593 containerd[1597]: time="2025-02-13T16:04:56.232533512Z" level=info msg="StartContainer for \"915cac2779f6aae2990afd7f5b32dd3da1891f3aa2e1e2c167795a9aacd7510a\" returns successfully" Feb 13 16:04:56.516218 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 16:04:59.347463 systemd-networkd[1236]: lxc_health: Link UP Feb 13 16:04:59.384426 systemd-networkd[1236]: lxc_health: Gained carrier Feb 13 16:05:00.127649 kubelet[2960]: I0213 16:05:00.125989 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kcxbb" podStartSLOduration=9.125941906 podStartE2EDuration="9.125941906s" podCreationTimestamp="2025-02-13 16:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:04:57.140189825 +0000 UTC m=+352.142667425" watchObservedRunningTime="2025-02-13 16:05:00.125941906 +0000 UTC m=+355.128419426" Feb 13 16:05:01.085369 systemd-networkd[1236]: lxc_health: Gained IPv6LL Feb 13 16:05:02.107982 systemd[1]: run-containerd-runc-k8s.io-915cac2779f6aae2990afd7f5b32dd3da1891f3aa2e1e2c167795a9aacd7510a-runc.GxrzAm.mount: Deactivated successfully. Feb 13 16:05:04.279064 systemd[1]: run-containerd-runc-k8s.io-915cac2779f6aae2990afd7f5b32dd3da1891f3aa2e1e2c167795a9aacd7510a-runc.ccbUWK.mount: Deactivated successfully. 
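
The pod_startup_latency_tracker line above reports podStartSLOduration=9.125941906s, which is simply the observed running time minus the pod creation timestamp (the pull timestamps are zeroed, consistent with no image pull being needed). Reproducing the arithmetic from the two timestamps in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout with optional fractional seconds, covering both timestamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 16:04:51 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-02-13 16:05:00.125941906 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 9.125941906s, the podStartSLOduration above
}
```
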
Feb 13 16:05:05.191460 containerd[1597]: time="2025-02-13T16:05:05.191396679Z" level=info msg="StopPodSandbox for \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\"" Feb 13 16:05:05.193073 containerd[1597]: time="2025-02-13T16:05:05.191534801Z" level=info msg="TearDown network for sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" successfully" Feb 13 16:05:05.193073 containerd[1597]: time="2025-02-13T16:05:05.191555841Z" level=info msg="StopPodSandbox for \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" returns successfully" Feb 13 16:05:05.193073 containerd[1597]: time="2025-02-13T16:05:05.192207289Z" level=info msg="RemovePodSandbox for \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\"" Feb 13 16:05:05.193073 containerd[1597]: time="2025-02-13T16:05:05.192238170Z" level=info msg="Forcibly stopping sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\"" Feb 13 16:05:05.193073 containerd[1597]: time="2025-02-13T16:05:05.192284771Z" level=info msg="TearDown network for sandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" successfully" Feb 13 16:05:05.196719 containerd[1597]: time="2025-02-13T16:05:05.196658909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:05:05.196961 containerd[1597]: time="2025-02-13T16:05:05.196928992Z" level=info msg="RemovePodSandbox \"399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7\" returns successfully" Feb 13 16:05:05.197738 containerd[1597]: time="2025-02-13T16:05:05.197710843Z" level=info msg="StopPodSandbox for \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\"" Feb 13 16:05:05.197809 containerd[1597]: time="2025-02-13T16:05:05.197787124Z" level=info msg="TearDown network for sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" successfully" Feb 13 16:05:05.197809 containerd[1597]: time="2025-02-13T16:05:05.197799764Z" level=info msg="StopPodSandbox for \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" returns successfully" Feb 13 16:05:05.199207 containerd[1597]: time="2025-02-13T16:05:05.198099688Z" level=info msg="RemovePodSandbox for \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\"" Feb 13 16:05:05.199207 containerd[1597]: time="2025-02-13T16:05:05.198126288Z" level=info msg="Forcibly stopping sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\"" Feb 13 16:05:05.199207 containerd[1597]: time="2025-02-13T16:05:05.198192689Z" level=info msg="TearDown network for sandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" successfully" Feb 13 16:05:05.201366 containerd[1597]: time="2025-02-13T16:05:05.201202209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
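
The teardown above (with the matching "returns successfully" completing below) is the kubelet garbage-collecting the two exited pod sandboxes over CRI: StopPodSandbox tears down sandbox networking, RemovePodSandbox deletes the sandbox, and both are idempotent, which is why forcibly stopping an already-gone sandbox still succeeds with only a "not found" warning. A sketch of the same two calls against containerd's CRI endpoint (socket path assumed):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd serves the CRI API on its main socket (assumed path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	id := "399dd598260fba6509fb244b8dbd16f4346b48d3e777c9a1a4c6d301bdee26c7"

	// Stop tears down the sandbox network; Remove deletes the sandbox.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}
```
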
Feb 13 16:05:05.201366 containerd[1597]: time="2025-02-13T16:05:05.201281450Z" level=info msg="RemovePodSandbox \"fa5972442b9ec8154e8d5f0f5de7996cf6c6ae7349d5a107cc4aef0e24bb1d80\" returns successfully" Feb 13 16:05:06.613226 sshd[5078]: Connection closed by 139.178.89.65 port 52350 Feb 13 16:05:06.614399 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:06.620224 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:05:06.620573 systemd[1]: sshd@48-157.90.248.142:22-139.178.89.65:52350.service: Deactivated successfully. Feb 13 16:05:06.623869 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:05:06.624831 systemd-logind[1567]: Removed session 23. Feb 13 16:05:18.965429 systemd[1]: Started sshd@50-157.90.248.142:22-125.94.71.207:39968.service - OpenSSH per-connection server daemon (125.94.71.207:39968). Feb 13 16:05:21.283782 sshd[5815]: Invalid user teamspeak3 from 125.94.71.207 port 39968 Feb 13 16:05:21.537422 sshd[5815]: Received disconnect from 125.94.71.207 port 39968:11: Bye Bye [preauth] Feb 13 16:05:21.537422 sshd[5815]: Disconnected from invalid user teamspeak3 125.94.71.207 port 39968 [preauth] Feb 13 16:05:21.541063 systemd[1]: sshd@50-157.90.248.142:22-125.94.71.207:39968.service: Deactivated successfully. Feb 13 16:05:22.017288 kubelet[2960]: E0213 16:05:22.017246 2960 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35902->10.0.0.2:2379: read: connection timed out" Feb 13 16:05:22.046478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a-rootfs.mount: Deactivated successfully. Feb 13 16:05:22.053580 containerd[1597]: time="2025-02-13T16:05:22.053334582Z" level=info msg="shim disconnected" id=c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a namespace=k8s.io Feb 13 16:05:22.053580 containerd[1597]: time="2025-02-13T16:05:22.053417183Z" level=warning msg="cleaning up after shim disconnected" id=c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a namespace=k8s.io Feb 13 16:05:22.053580 containerd[1597]: time="2025-02-13T16:05:22.053427104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:22.065313 containerd[1597]: time="2025-02-13T16:05:22.065265341Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:05:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:05:22.183215 kubelet[2960]: I0213 16:05:22.182997 2960 scope.go:117] "RemoveContainer" containerID="c7a14bb4adb47a87c6d56c4b5dd129836df90d1aabc32a6912fc4841b0e57c7a" Feb 13 16:05:22.186795 containerd[1597]: time="2025-02-13T16:05:22.186756680Z" level=info msg="CreateContainer within sandbox \"ba6dc931369f11a8d5a54106b7a4d87cf091f45ecf91b2a11c4d238f919cf963\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 16:05:22.199591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494241888.mount: Deactivated successfully. 
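
The "Failed to update lease" error above is the kubelet timing out while renewing its node Lease in the kube-node-lease namespace; the renewal is a get-modify-update of a single Lease object, and it is that Update write which hit the etcd read timeout. A client-go sketch of the core of the renewal (retries and clock handling omitted):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// renewLease sketches the kubelet's periodic node-lease renewal; the Update
// call below is the write that timed out against etcd in the log above.
func renewLease(ctx context.Context, cs kubernetes.Interface, node string) error {
	leases := cs.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = renewLease(context.Background(), cs, "ci-4152-2-1-f-29672fd7f0")
}
```
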
Feb 13 16:05:22.201840 containerd[1597]: time="2025-02-13T16:05:22.201628719Z" level=info msg="CreateContainer within sandbox \"ba6dc931369f11a8d5a54106b7a4d87cf091f45ecf91b2a11c4d238f919cf963\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"be18bbdc796b4f58346861f784a28b927103c5a2aed9cce563154ae1dd3e002e\"" Feb 13 16:05:22.202139 containerd[1597]: time="2025-02-13T16:05:22.202111605Z" level=info msg="StartContainer for \"be18bbdc796b4f58346861f784a28b927103c5a2aed9cce563154ae1dd3e002e\"" Feb 13 16:05:22.268100 containerd[1597]: time="2025-02-13T16:05:22.267888522Z" level=info msg="StartContainer for \"be18bbdc796b4f58346861f784a28b927103c5a2aed9cce563154ae1dd3e002e\" returns successfully" Feb 13 16:05:23.044400 systemd[1]: run-containerd-runc-k8s.io-be18bbdc796b4f58346861f784a28b927103c5a2aed9cce563154ae1dd3e002e-runc.cNvHWc.mount: Deactivated successfully. Feb 13 16:05:23.044979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133-rootfs.mount: Deactivated successfully. Feb 13 16:05:23.047081 containerd[1597]: time="2025-02-13T16:05:23.046824262Z" level=info msg="shim disconnected" id=aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133 namespace=k8s.io Feb 13 16:05:23.047081 containerd[1597]: time="2025-02-13T16:05:23.047003385Z" level=warning msg="cleaning up after shim disconnected" id=aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133 namespace=k8s.io Feb 13 16:05:23.047081 containerd[1597]: time="2025-02-13T16:05:23.047014545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:23.190955 kubelet[2960]: I0213 16:05:23.190921 2960 scope.go:117] "RemoveContainer" containerID="aab415f40eeae452fde9ca719484feefd3a22e6582dd11e0671ace3fce6c6133" Feb 13 16:05:23.193448 containerd[1597]: time="2025-02-13T16:05:23.193295214Z" level=info msg="CreateContainer within sandbox \"c5e0226fe38f611cab5f893b7e6ded656debdea335094431e5a4f849d3dab859\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 16:05:23.208169 containerd[1597]: time="2025-02-13T16:05:23.207480803Z" level=info msg="CreateContainer within sandbox \"c5e0226fe38f611cab5f893b7e6ded656debdea335094431e5a4f849d3dab859\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2fd661c0590945c64950b01df083e31d78bf791851214d4212977fd999b98dfe\"" Feb 13 16:05:23.208169 containerd[1597]: time="2025-02-13T16:05:23.208102772Z" level=info msg="StartContainer for \"2fd661c0590945c64950b01df083e31d78bf791851214d4212977fd999b98dfe\"" Feb 13 16:05:23.279105 containerd[1597]: time="2025-02-13T16:05:23.279043437Z" level=info msg="StartContainer for \"2fd661c0590945c64950b01df083e31d78bf791851214d4212977fd999b98dfe\" returns successfully" Feb 13 16:05:26.622287 kubelet[2960]: E0213 16:05:26.622240 2960 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35716->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-1-f-29672fd7f0.1823d01fd12dd4d3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-1-f-29672fd7f0,UID:cda3808ce3eb2317c3dad37e842c8267,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 
500,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-f-29672fd7f0,},FirstTimestamp:2025-02-13 16:05:16.167148755 +0000 UTC m=+371.169626315,LastTimestamp:2025-02-13 16:05:16.167148755 +0000 UTC m=+371.169626315,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-f-29672fd7f0,}"
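
The rejected event records a kube-apiserver readiness probe failing with HTTP 500 during the same etcd outage (the event POST itself then failed for the same reason, hence "will not retry"). In miniature, a readiness endpoint of that shape: a hypothetical /readyz handler, not apiserver code, that returns 500 whenever its backend check fails, which is exactly what the kubelet then reports as an Unhealthy event:

```go
package main

import (
	"net/http"
)

// readyz wraps a backend check (for the apiserver, etcd reachability); when
// the check fails the handler answers 500, producing a probe result like
// "Readiness probe failed: HTTP probe failed with statuscode: 500".
func readyz(healthy func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := healthy(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("ok"))
	}
}

func main() {
	http.Handle("/readyz", readyz(func() error { return nil }))
	http.ListenAndServe(":8080", nil)
}
```
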