Feb 13 15:16:45.904679 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:16:45.904703 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:16:45.904713 kernel: KASLR enabled
Feb 13 15:16:45.904719 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:16:45.904724 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Feb 13 15:16:45.904730 kernel: random: crng init done
Feb 13 15:16:45.904737 kernel: secureboot: Secure boot disabled
Feb 13 15:16:45.904743 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:16:45.904748 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:16:45.904756 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:16:45.904762 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904768 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904773 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904779 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904787 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904794 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904800 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904807 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904813 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:16:45.904819 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:16:45.904825 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:16:45.904831 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:16:45.904837 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:16:45.904844 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 15:16:45.904850 kernel: Zone ranges:
Feb 13 15:16:45.904857 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:16:45.904863 kernel:   DMA32    empty
Feb 13 15:16:45.904869 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:16:45.904876 kernel: Movable zone start for each node
Feb 13 15:16:45.904882 kernel: Early memory node ranges
Feb 13 15:16:45.904888 kernel:   node   0: [mem 0x0000000040000000-0x000000013666ffff]
Feb 13 15:16:45.904894 kernel:   node   0: [mem 0x0000000136670000-0x000000013667ffff]
Feb 13 15:16:45.904900 kernel:   node   0: [mem 0x0000000136680000-0x000000013676ffff]
Feb 13 15:16:45.904906 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:16:45.904913 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:16:45.904919 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:16:45.904925 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:16:45.904932 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:16:45.904966 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:16:45.904973 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:16:45.904983 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:16:45.904990 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:16:45.904996 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:16:45.905004 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:16:45.905010 kernel: psci: Trusted OS migration not required
Feb 13 15:16:45.905017 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:16:45.905023 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:16:45.905030 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:16:45.905036 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:16:45.905043 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:16:45.905050 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:16:45.905056 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:16:45.905063 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:16:45.905071 kernel: CPU features: detected: Spectre-v4
Feb 13 15:16:45.905077 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:16:45.905094 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:16:45.905101 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:16:45.905107 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:16:45.905114 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:16:45.905120 kernel: alternatives: applying boot alternatives
Feb 13 15:16:45.905128 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:16:45.905135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:16:45.905141 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:16:45.905148 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:16:45.905157 kernel: Fallback order for Node 0: 0
Feb 13 15:16:45.905163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:16:45.905170 kernel: Policy zone: Normal
Feb 13 15:16:45.905176 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:16:45.905183 kernel: software IO TLB: area num 2.
Feb 13 15:16:45.905190 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 15:16:45.905196 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Feb 13 15:16:45.905203 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:16:45.905210 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:16:45.905217 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:16:45.905224 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:16:45.905230 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:16:45.905238 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:16:45.905245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:16:45.905251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:16:45.905257 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:16:45.905264 kernel: GICv3: 256 SPIs implemented
Feb 13 15:16:45.905270 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:16:45.905277 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:16:45.905283 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:16:45.905290 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:16:45.905296 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:16:45.905303 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:16:45.905311 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:16:45.905318 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:16:45.905325 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:16:45.905331 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:16:45.905338 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:16:45.905344 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:16:45.905351 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:16:45.905358 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:16:45.905364 kernel: Console: colour dummy device 80x25
Feb 13 15:16:45.905371 kernel: ACPI: Core revision 20230628
Feb 13 15:16:45.905378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:16:45.905386 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:16:45.905393 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:16:45.905399 kernel: landlock: Up and running.
Feb 13 15:16:45.905406 kernel: SELinux: Initializing.
Feb 13 15:16:45.905413 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:45.905420 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:45.905426 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:45.905433 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:45.905440 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:16:45.905448 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:16:45.905455 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:16:45.905462 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:16:45.905468 kernel: Remapping and enabling EFI services.
Feb 13 15:16:45.905475 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:16:45.905482 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:16:45.905489 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:16:45.905496 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:16:45.905502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:16:45.905510 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:16:45.905517 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:16:45.905529 kernel: SMP: Total of 2 processors activated.
Feb 13 15:16:45.905538 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:16:45.905545 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:16:45.905552 kernel: CPU features: detected: Common not Private translations
Feb 13 15:16:45.905559 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:16:45.905566 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:16:45.905573 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:16:45.905582 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:16:45.905589 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:16:45.905596 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:16:45.905603 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:16:45.905610 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:16:45.905617 kernel: alternatives: applying system-wide alternatives
Feb 13 15:16:45.905624 kernel: devtmpfs: initialized
Feb 13 15:16:45.905631 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:16:45.905640 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:16:45.905647 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:16:45.905654 kernel: SMBIOS 3.0.0 present.
Feb 13 15:16:45.905661 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:16:45.905668 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:16:45.905675 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:16:45.905682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:16:45.905689 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:16:45.905696 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:16:45.905705 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:16:45.905713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:16:45.905720 kernel: cpuidle: using governor menu
Feb 13 15:16:45.905727 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:16:45.905734 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:16:45.905741 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:16:45.905748 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:16:45.905755 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:16:45.905762 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:16:45.905771 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:16:45.905778 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:16:45.905785 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:16:45.905792 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:16:45.905799 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:16:45.905806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:16:45.905813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:16:45.905820 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:16:45.905827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:16:45.905835 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:16:45.905842 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:16:45.905849 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:16:45.905856 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:16:45.905863 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:16:45.905870 kernel: ACPI: Interpreter enabled
Feb 13 15:16:45.905877 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:16:45.905884 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:16:45.905892 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:16:45.905900 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:16:45.905907 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:16:45.906051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:16:45.906141 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:16:45.906206 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:16:45.906270 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:16:45.906331 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:16:45.906344 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:16:45.906351 kernel: PCI host bridge to bus 0000:00
Feb 13 15:16:45.906422 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:16:45.906482 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:16:45.906540 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:16:45.906596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:16:45.906675 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:16:45.906755 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:16:45.906822 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:16:45.906887 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:16:45.907343 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.907425 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:16:45.907500 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.907573 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:16:45.907646 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.907711 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:16:45.907789 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.907871 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:16:45.907970 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.908053 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:16:45.908141 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.908208 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:16:45.908281 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.908348 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:16:45.908431 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.908513 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:16:45.908597 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:16:45.908663 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:16:45.908821 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:16:45.908893 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:16:45.909751 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:16:45.909852 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:16:45.909923 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:16:45.910015 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:16:45.910131 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:16:45.910207 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:16:45.910283 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:16:45.911540 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:16:45.911628 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:16:45.911708 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:16:45.911777 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:16:45.911852 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:16:45.911919 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:16:45.914134 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:16:45.914235 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:16:45.914311 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:16:45.914379 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:16:45.914457 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:16:45.914524 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:16:45.914591 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:16:45.914659 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:16:45.914727 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:16:45.914791 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:16:45.914855 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:16:45.914923 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:16:45.915005 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:16:45.915071 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:16:45.915155 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:16:45.915224 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:16:45.915290 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:16:45.915358 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:16:45.915424 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:16:45.915487 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:16:45.915554 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:16:45.915620 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:16:45.915697 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:16:45.915765 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:16:45.915828 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:16:45.915892 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:16:45.918061 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:16:45.918169 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:16:45.918236 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:16:45.918304 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:16:45.918375 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:16:45.918438 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:16:45.918508 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:16:45.918572 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:16:45.918637 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:16:45.918703 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:16:45.918768 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:16:45.918837 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:16:45.918902 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:16:45.919682 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:16:45.919764 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:16:45.919831 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:16:45.919895 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:16:45.920136 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:16:45.920219 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:16:45.920285 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:16:45.920348 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:16:45.920412 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:16:45.920475 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:16:45.920549 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:16:45.920611 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:16:45.920678 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:16:45.920742 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:16:45.920812 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:16:45.920876 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:16:45.920999 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:16:45.921073 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:16:45.921158 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:16:45.921231 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:16:45.921295 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:16:45.921358 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:16:45.921421 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:16:45.921483 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:16:45.921545 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:16:45.921608 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:16:45.921670 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:16:45.921735 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:16:45.921798 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:16:45.921861 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:16:45.921924 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:16:45.922002 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:16:45.922068 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:16:45.922173 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:16:45.922245 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:16:45.922322 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:16:45.922389 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:16:45.922455 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:16:45.922518 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:16:45.922581 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:16:45.922643 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:16:45.922706 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:16:45.922775 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:16:45.922842 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:16:45.922905 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:16:45.922984 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:16:45.923049 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:16:45.923137 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:16:45.923207 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:16:45.923270 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:16:45.923332 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:16:45.923395 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:16:45.923457 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:16:45.923527 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:16:45.923590 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:16:45.923658 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:16:45.923723 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:16:45.923786 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:16:45.923855 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:16:45.923921 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:16:45.923999 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:16:45.924066 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:16:45.924142 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:16:45.924210 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:16:45.924281 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:16:45.924349 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:16:45.924414 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:16:45.924477 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:16:45.924542 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:16:45.924608 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:16:45.924683 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:16:45.924754 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:16:45.924850 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:16:45.924920 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:16:45.925039 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:16:45.925145 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:16:45.925212 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:16:45.926100 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:16:45.926203 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:16:45.926279 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:16:45.926346 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:16:45.926414 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:16:45.926492 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:16:45.926557 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:16:45.926622 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:16:45.926689 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:16:45.926748 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:16:45.926810 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:16:45.926881 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:16:45.926996 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:16:45.927068 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:16:45.927154 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:16:45.927215 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:16:45.927282 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:16:45.927359 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:16:45.927426 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:16:45.927486 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:16:45.927557 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:16:45.927617 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:16:45.927677 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:16:45.927751 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:16:45.927813 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:16:45.927875 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:16:45.928758 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:16:45.928865 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:16:45.928930 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:16:45.929019 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:16:45.929097 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:16:45.929162 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:16:45.929233 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:16:45.929296 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:16:45.929363 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:16:45.929432 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:16:45.929495 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:16:45.929557 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:16:45.929567 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:16:45.929576 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:16:45.929584 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:16:45.929594 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:16:45.929602 kernel: iommu: Default domain type: Translated
Feb 13 15:16:45.929610 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:16:45.929618 kernel: efivars: Registered efivars operations
Feb 13 15:16:45.929626 kernel: vgaarb: loaded
Feb 13 15:16:45.929634 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:16:45.929641 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:16:45.929650 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:16:45.929658 kernel: pnp: PnP ACPI init
Feb 13 15:16:45.929739 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:16:45.929751 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:16:45.929759 kernel: NET: Registered PF_INET protocol family
Feb 13 15:16:45.929767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:16:45.929775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:16:45.929783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:16:45.929791 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:16:45.929799 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:16:45.929809 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:16:45.929818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:45.929826 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:45.929833 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:16:45.929908 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:16:45.929919 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:16:45.929929 kernel: kvm [1]: HYP mode not available
Feb 13 15:16:45.929948 kernel: Initialise system trusted keyrings
Feb 13 15:16:45.929956 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:16:45.929967 kernel: Key type asymmetric registered
Feb 13 15:16:45.929975 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:16:45.929983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:16:45.929991 kernel: io scheduler mq-deadline registered
Feb 13 15:16:45.929999 kernel: io scheduler kyber registered
Feb 13 15:16:45.930007 kernel: io scheduler bfq registered
Feb 13 15:16:45.930015 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:16:45.930128 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:16:45.930225 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:16:45.930301 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:16:45.930374 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:16:45.930461 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:16:45.930532 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 15:16:45.930601 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:16:45.930666 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:16:45.930750 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.930820 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:16:45.930884 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:16:45.930964 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.931033 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:16:45.931111 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:16:45.931180 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.931246 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:16:45.931312 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:16:45.931378 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.931444 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:16:45.931524 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:16:45.931594 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.931662 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:16:45.931727 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:16:45.931793 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
15:16:45.931803 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:16:45.931868 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:16:45.931987 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:16:45.932068 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:16:45.932086 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:16:45.932096 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:16:45.932104 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:16:45.932182 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:16:45.932256 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:16:45.932267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:16:45.932278 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:16:45.932347 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:16:45.932357 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:16:45.932365 kernel: thunder_xcv, ver 1.0 Feb 13 15:16:45.932372 kernel: thunder_bgx, ver 1.0 Feb 13 15:16:45.932380 kernel: nicpf, ver 1.0 Feb 13 15:16:45.932387 kernel: nicvf, ver 1.0 Feb 13 15:16:45.932462 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:16:45.932528 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:45 UTC (1739459805) Feb 13 15:16:45.932538 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:16:45.932546 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:16:45.932554 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:16:45.932561 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:16:45.932568 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:16:45.932576 kernel: Segment 
Routing with IPv6 Feb 13 15:16:45.932585 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:16:45.932593 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:16:45.932602 kernel: Key type dns_resolver registered Feb 13 15:16:45.932609 kernel: registered taskstats version 1 Feb 13 15:16:45.932617 kernel: Loading compiled-in X.509 certificates Feb 13 15:16:45.932625 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4' Feb 13 15:16:45.932632 kernel: Key type .fscrypt registered Feb 13 15:16:45.932639 kernel: Key type fscrypt-provisioning registered Feb 13 15:16:45.932647 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:16:45.932654 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:16:45.932663 kernel: ima: No architecture policies found Feb 13 15:16:45.932671 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:16:45.932678 kernel: clk: Disabling unused clocks Feb 13 15:16:45.932686 kernel: Freeing unused kernel memory: 38336K Feb 13 15:16:45.932693 kernel: Run /init as init process Feb 13 15:16:45.932700 kernel: with arguments: Feb 13 15:16:45.932708 kernel: /init Feb 13 15:16:45.932715 kernel: with environment: Feb 13 15:16:45.932722 kernel: HOME=/ Feb 13 15:16:45.932729 kernel: TERM=linux Feb 13 15:16:45.932738 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:16:45.932746 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:16:45.932757 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:16:45.932766 systemd[1]: Detected virtualization kvm. Feb 13 15:16:45.932773 systemd[1]: Detected architecture arm64. 
Feb 13 15:16:45.932781 systemd[1]: Running in initrd. Feb 13 15:16:45.932789 systemd[1]: No hostname configured, using default hostname. Feb 13 15:16:45.932798 systemd[1]: Hostname set to . Feb 13 15:16:45.932806 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:16:45.932814 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:16:45.932823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:45.932831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:45.932839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:16:45.932847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:45.932855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:16:45.932866 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:16:45.932875 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:16:45.932883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:16:45.932891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:45.932899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:45.932907 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:45.932915 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:45.932924 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:45.934950 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:45.934987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 15:16:45.934997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:45.935005 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:16:45.935013 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:16:45.935022 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:45.935030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:45.935044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:45.935052 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:45.935060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:16:45.935069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:45.935077 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:16:45.935122 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:16:45.935130 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:45.935138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:45.935146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:45.935157 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:16:45.935165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:45.935173 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:16:45.935217 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 15:16:45.935240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:16:45.935249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Feb 13 15:16:45.935257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:45.935265 kernel: Bridge firewalling registered Feb 13 15:16:45.935276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:45.935284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:45.935293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:45.935301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:45.935310 systemd-journald[237]: Journal started Feb 13 15:16:45.935329 systemd-journald[237]: Runtime Journal (/run/log/journal/c61f4f18c4874c7f9e1a12ea138da4cb) is 8M, max 76.6M, 68.6M free. Feb 13 15:16:45.898834 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 15:16:45.938139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:45.913187 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 15:16:45.941430 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:45.943962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:45.946694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:45.954530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:45.958276 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:45.966375 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:16:45.968175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:45.971729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:16:45.979366 dracut-cmdline[273]: dracut-dracut-053 Feb 13 15:16:45.982749 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef Feb 13 15:16:46.014319 systemd-resolved[276]: Positive Trust Anchors: Feb 13 15:16:46.014337 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:46.014368 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:46.019391 systemd-resolved[276]: Defaulting to hostname 'linux'. Feb 13 15:16:46.020907 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:46.021611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:46.074986 kernel: SCSI subsystem initialized Feb 13 15:16:46.080021 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:16:46.086985 kernel: iscsi: registered transport (tcp) Feb 13 15:16:46.100141 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:16:46.100235 kernel: QLogic iSCSI HBA Driver Feb 13 15:16:46.146141 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:16:46.159267 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:16:46.177014 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:16:46.177173 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:16:46.177213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:16:46.226991 kernel: raid6: neonx8 gen() 15670 MB/s Feb 13 15:16:46.243980 kernel: raid6: neonx4 gen() 15717 MB/s Feb 13 15:16:46.261005 kernel: raid6: neonx2 gen() 13142 MB/s Feb 13 15:16:46.277979 kernel: raid6: neonx1 gen() 10489 MB/s Feb 13 15:16:46.295003 kernel: raid6: int64x8 gen() 6745 MB/s Feb 13 15:16:46.311993 kernel: raid6: int64x4 gen() 7321 MB/s Feb 13 15:16:46.329008 kernel: raid6: int64x2 gen() 6086 MB/s Feb 13 15:16:46.345979 kernel: raid6: int64x1 gen() 5033 MB/s Feb 13 15:16:46.346034 kernel: raid6: using algorithm neonx4 gen() 15717 MB/s Feb 13 15:16:46.362994 kernel: raid6: .... xor() 12332 MB/s, rmw enabled Feb 13 15:16:46.363049 kernel: raid6: using neon recovery algorithm Feb 13 15:16:46.368214 kernel: xor: measuring software checksum speed Feb 13 15:16:46.368288 kernel: 8regs : 20464 MB/sec Feb 13 15:16:46.368411 kernel: 32regs : 21687 MB/sec Feb 13 15:16:46.368441 kernel: arm64_neon : 27841 MB/sec Feb 13 15:16:46.368463 kernel: xor: using function: arm64_neon (27841 MB/sec) Feb 13 15:16:46.416993 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:16:46.430102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:46.436147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:46.449563 systemd-udevd[457]: Using default interface naming scheme 'v255'. Feb 13 15:16:46.453385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:16:46.462109 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:16:46.475567 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Feb 13 15:16:46.509990 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:46.515129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:46.562353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:46.568072 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:16:46.594054 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:46.595618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:46.597513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:46.599123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:46.607407 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:16:46.630041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:46.659264 kernel: ACPI: bus type USB registered Feb 13 15:16:46.659321 kernel: usbcore: registered new interface driver usbfs Feb 13 15:16:46.659332 kernel: usbcore: registered new interface driver hub Feb 13 15:16:46.659347 kernel: usbcore: registered new device driver usb Feb 13 15:16:46.680965 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:16:46.688971 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:16:46.689045 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:16:46.698596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:16:46.698846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:16:46.701905 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:46.702893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:46.703165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:46.705091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:46.711223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:46.722039 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:16:46.731628 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:16:46.731745 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:16:46.731827 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:16:46.731906 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:16:46.732009 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:16:46.732104 kernel: hub 1-0:1.0: USB hub found Feb 13 15:16:46.732212 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:16:46.732295 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:16:46.732389 kernel: hub 2-0:1.0: USB hub found Feb 13 15:16:46.732473 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:16:46.723997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:46.727368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:16:46.741373 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:16:46.745342 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:16:46.745463 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:16:46.745475 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:16:46.755156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:46.765287 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:16:46.775860 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:16:46.776071 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:16:46.776186 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:16:46.776268 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:16:46.776345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:16:46.776355 kernel: GPT:17805311 != 80003071 Feb 13 15:16:46.776364 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:16:46.776373 kernel: GPT:17805311 != 80003071 Feb 13 15:16:46.776386 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:16:46.776395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:16:46.776404 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:16:46.816489 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (523) Feb 13 15:16:46.820967 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (526) Feb 13 15:16:46.838188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:16:46.846647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:16:46.857599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Feb 13 15:16:46.864327 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:16:46.864989 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:16:46.877238 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:16:46.884766 disk-uuid[576]: Primary Header is updated. Feb 13 15:16:46.884766 disk-uuid[576]: Secondary Entries is updated. Feb 13 15:16:46.884766 disk-uuid[576]: Secondary Header is updated. Feb 13 15:16:46.893786 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:16:46.973965 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:16:47.219026 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:16:47.353560 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:16:47.353626 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:16:47.355958 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:16:47.409802 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:16:47.410252 kernel: usbcore: registered new interface driver usbhid Feb 13 15:16:47.410285 kernel: usbhid: USB HID core driver Feb 13 15:16:47.906979 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:16:47.908053 disk-uuid[577]: The operation has completed successfully. Feb 13 15:16:47.965785 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:16:47.965900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Feb 13 15:16:47.999228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:16:48.004355 sh[591]: Success Feb 13 15:16:48.018029 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:16:48.067338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:16:48.079967 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:16:48.082577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:16:48.109996 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855 Feb 13 15:16:48.110057 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:48.110088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:16:48.111022 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:16:48.112422 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:16:48.118986 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:16:48.121263 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:16:48.122997 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:16:48.128233 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:16:48.133847 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 15:16:48.144485 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:16:48.144570 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:48.144595 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:16:48.149007 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:16:48.149104 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:16:48.161841 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:16:48.162990 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:16:48.169931 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:16:48.177128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:16:48.267269 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:48.267652 ignition[681]: Ignition 2.20.0 Feb 13 15:16:48.270380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:48.267658 ignition[681]: Stage: fetch-offline Feb 13 15:16:48.267695 ignition[681]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:48.267703 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:16:48.267861 ignition[681]: parsed url from cmdline: "" Feb 13 15:16:48.267864 ignition[681]: no config URL provided Feb 13 15:16:48.267868 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:16:48.267876 ignition[681]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:16:48.267881 ignition[681]: failed to fetch config: resource requires networking Feb 13 15:16:48.268200 ignition[681]: Ignition finished successfully Feb 13 15:16:48.289244 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:16:48.311238 systemd-networkd[780]: lo: Link UP Feb 13 15:16:48.311245 systemd-networkd[780]: lo: Gained carrier Feb 13 15:16:48.315963 systemd-networkd[780]: Enumeration completed Feb 13 15:16:48.316460 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:48.316760 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.316769 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:48.318702 systemd[1]: Reached target network.target - Network. Feb 13 15:16:48.318753 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.318756 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:48.319878 systemd-networkd[780]: eth0: Link UP Feb 13 15:16:48.319881 systemd-networkd[780]: eth0: Gained carrier Feb 13 15:16:48.319888 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.326282 systemd-networkd[780]: eth1: Link UP Feb 13 15:16:48.326285 systemd-networkd[780]: eth1: Gained carrier Feb 13 15:16:48.326294 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.329195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:16:48.343104 ignition[783]: Ignition 2.20.0
Feb 13 15:16:48.343695 ignition[783]: Stage: fetch
Feb 13 15:16:48.343895 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:48.343906 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:48.344019 ignition[783]: parsed url from cmdline: ""
Feb 13 15:16:48.344022 ignition[783]: no config URL provided
Feb 13 15:16:48.344027 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:16:48.344035 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:16:48.344148 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 15:16:48.345010 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 15:16:48.353039 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:16:48.384043 systemd-networkd[780]: eth0: DHCPv4 address 188.245.168.142/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:16:48.545521 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 15:16:48.552654 ignition[783]: GET result: OK
Feb 13 15:16:48.552780 ignition[783]: parsing config with SHA512: 387122fa2f99793fb5f9d4142c6572cf49e9e7a679d17c5a73e6bbbb62797c20e54c93dbea6a35cb6fff08eaff248b1409d3edd8e397c49e68c8728ff5a4e0e4
Feb 13 15:16:48.560033 unknown[783]: fetched base config from "system"
Feb 13 15:16:48.560043 unknown[783]: fetched base config from "system"
Feb 13 15:16:48.560052 unknown[783]: fetched user config from "hetzner"
Feb 13 15:16:48.561873 ignition[783]: fetch: fetch complete
Feb 13 15:16:48.561883 ignition[783]: fetch: fetch passed
Feb 13 15:16:48.561962 ignition[783]: Ignition finished successfully
Feb 13 15:16:48.563552 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:48.570124 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:16:48.583750 ignition[791]: Ignition 2.20.0
Feb 13 15:16:48.583765 ignition[791]: Stage: kargs
Feb 13 15:16:48.584063 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:48.584088 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:48.585151 ignition[791]: kargs: kargs passed
Feb 13 15:16:48.585204 ignition[791]: Ignition finished successfully
Feb 13 15:16:48.587693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:48.594144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:16:48.605281 ignition[798]: Ignition 2.20.0
Feb 13 15:16:48.605292 ignition[798]: Stage: disks
Feb 13 15:16:48.605463 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:48.605473 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:48.608185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:16:48.606414 ignition[798]: disks: disks passed
Feb 13 15:16:48.609733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:48.606463 ignition[798]: Ignition finished successfully
Feb 13 15:16:48.611735 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:16:48.612671 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:48.613775 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:48.614662 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:48.625305 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:16:48.645667 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:16:48.649453 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:16:49.113082 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:16:49.166204 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:16:49.166910 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:16:49.167929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:49.175032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:49.178238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:16:49.187236 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:16:49.189566 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:16:49.189607 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:49.199588 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815)
Feb 13 15:16:49.199612 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:49.199623 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:49.199633 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:16:49.191903 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:16:49.204283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:16:49.208508 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:16:49.208550 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:16:49.213564 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:49.246708 coreos-metadata[817]: Feb 13 15:16:49.246 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 15:16:49.249408 coreos-metadata[817]: Feb 13 15:16:49.249 INFO Fetch successful
Feb 13 15:16:49.250000 coreos-metadata[817]: Feb 13 15:16:49.249 INFO wrote hostname ci-4230-0-1-0-5f4e073373 to /sysroot/etc/hostname
Feb 13 15:16:49.250762 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:16:49.252493 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:16:49.257964 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:16:49.263398 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:16:49.267950 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:16:49.361322 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:49.368127 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:16:49.372503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:49.379954 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:49.400673 ignition[932]: INFO : Ignition 2.20.0
Feb 13 15:16:49.401462 ignition[932]: INFO : Stage: mount
Feb 13 15:16:49.402107 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:49.403505 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:49.404585 ignition[932]: INFO : mount: mount passed
Feb 13 15:16:49.406186 ignition[932]: INFO : Ignition finished successfully
Feb 13 15:16:49.406993 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:49.408319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:16:49.413079 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:16:49.440189 systemd-networkd[780]: eth1: Gained IPv6LL
Feb 13 15:16:49.760186 systemd-networkd[780]: eth0: Gained IPv6LL
Feb 13 15:16:50.109151 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:16:50.116205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:50.125988 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944)
Feb 13 15:16:50.130952 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:16:50.131021 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:50.131046 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:16:50.134283 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:16:50.134325 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:16:50.137238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:50.155591 ignition[960]: INFO : Ignition 2.20.0
Feb 13 15:16:50.155591 ignition[960]: INFO : Stage: files
Feb 13 15:16:50.156755 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:50.156755 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:50.158195 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:16:50.159429 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:16:50.159429 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:16:50.163619 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:16:50.164945 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:16:50.164945 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:16:50.164131 unknown[960]: wrote ssh authorized keys file for user: core
Feb 13 15:16:50.167615 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:50.167615 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:50.264989 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:16:51.318520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:51.318520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:51.321922 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:51.991519 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:16:52.284725 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:52.284725 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:16:52.289546 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:16:52.986690 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:16:54.122106 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:16:54.122106 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:54.124542 ignition[960]: INFO : files: files passed
Feb 13 15:16:54.124542 ignition[960]: INFO : Ignition finished successfully
Feb 13 15:16:54.128997 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:16:54.138133 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:16:54.142773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:16:54.146117 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:16:54.151182 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:16:54.163979 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:54.163979 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:54.167553 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:54.170014 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:54.170854 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:16:54.180173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:16:54.208720 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:16:54.209855 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:16:54.210976 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:16:54.213213 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:16:54.214493 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:16:54.222243 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:16:54.236222 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:54.243182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:16:54.252986 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:54.254460 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:54.255231 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:16:54.256906 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:16:54.257088 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:54.258894 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:16:54.259579 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:16:54.260648 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:16:54.261645 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:54.262639 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:54.263698 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:16:54.264743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:54.265893 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:16:54.266871 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:16:54.267952 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:16:54.268843 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:16:54.268979 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:54.270244 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:54.270855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:54.271878 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:16:54.271965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:54.272991 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:16:54.273115 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:54.274586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:16:54.274698 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:54.275831 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:16:54.275917 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:16:54.277019 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:16:54.277156 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:16:54.284159 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:16:54.288418 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:54.288873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:16:54.289002 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:54.295129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:16:54.295241 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:54.302173 ignition[1014]: INFO : Ignition 2.20.0
Feb 13 15:16:54.302173 ignition[1014]: INFO : Stage: umount
Feb 13 15:16:54.302173 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:54.302173 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:16:54.307247 ignition[1014]: INFO : umount: umount passed
Feb 13 15:16:54.307247 ignition[1014]: INFO : Ignition finished successfully
Feb 13 15:16:54.303345 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:16:54.303441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:16:54.308007 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:16:54.308679 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:16:54.310533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:16:54.312560 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:16:54.312607 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:16:54.313431 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:16:54.313471 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:54.314445 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:16:54.314515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:54.315331 systemd[1]: Stopped target network.target - Network.
Feb 13 15:16:54.316120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:16:54.316172 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:54.317148 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:16:54.317870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:16:54.320997 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:54.324654 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:16:54.326012 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:16:54.327668 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:16:54.327754 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:54.329697 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:16:54.329768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:54.330790 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:16:54.330840 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:16:54.332161 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:16:54.332202 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:54.333575 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:16:54.334856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:54.336480 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:16:54.336687 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:54.338203 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:16:54.338283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:54.341408 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:16:54.341507 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:16:54.343886 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:16:54.344021 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:16:54.344071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:54.350063 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:16:54.351336 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:16:54.352023 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:54.354071 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:54.356268 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:16:54.356396 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:54.359670 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:16:54.364483 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:16:54.365548 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:54.366635 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:16:54.366713 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:16:54.371417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:16:54.371475 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:54.372676 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:16:54.372709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:54.373761 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:16:54.373808 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:54.375661 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:16:54.375711 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:54.377514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:54.377562 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:54.384413 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:16:54.385121 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:16:54.385177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:54.385998 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:16:54.386046 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:54.391130 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:16:54.391213 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:54.394129 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:16:54.394172 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:54.395282 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:16:54.395323 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:54.396092 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:16:54.396144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:54.397682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:54.397722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:54.400504 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:16:54.400569 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:16:54.400617 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:16:54.400655 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:16:54.401185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:16:54.401336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:16:54.402679 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:16:54.410363 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:16:54.420002 systemd[1]: Switching root.
Feb 13 15:16:54.456453 systemd-journald[237]: Journal stopped
Feb 13 15:16:55.345293 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:16:55.345364 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:16:55.345378 kernel: SELinux: policy capability open_perms=1
Feb 13 15:16:55.345387 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:16:55.345399 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:16:55.345408 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:16:55.345417 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:16:55.345429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:16:55.345438 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:16:55.345449 kernel: audit: type=1403 audit(1739459814.564:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:16:55.345460 systemd[1]: Successfully loaded SELinux policy in 32.718ms.
Feb 13 15:16:55.345478 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.943ms.
Feb 13 15:16:55.345489 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:16:55.345499 systemd[1]: Detected virtualization kvm.
Feb 13 15:16:55.345509 systemd[1]: Detected architecture arm64.
Feb 13 15:16:55.345519 systemd[1]: Detected first boot.
Feb 13 15:16:55.345529 systemd[1]: Hostname set to .
Feb 13 15:16:55.345540 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:16:55.345550 zram_generator::config[1058]: No configuration found.
Feb 13 15:16:55.345560 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:16:55.345571 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:16:55.345582 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:16:55.345592 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:16:55.345601 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:16:55.345614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:16:55.345626 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:16:55.345636 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:16:55.345645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:16:55.345655 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:16:55.345665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:16:55.345675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:16:55.345685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:16:55.345694 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:16:55.345704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:55.345715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:55.345726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:16:55.345736 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:16:55.345746 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:16:55.345758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:55.345769 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:16:55.345779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:55.345790 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:16:55.345800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:16:55.345810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:55.345820 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:16:55.345830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:55.345843 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:55.345853 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:55.345863 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:55.345876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:16:55.345887 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:16:55.345899 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:16:55.345909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:55.345919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:55.345930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:55.353035 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:16:55.353094 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:16:55.353107 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:16:55.353118 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:16:55.353128 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Feb 13 15:16:55.353139 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:16:55.353150 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:16:55.353161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:16:55.353174 systemd[1]: Reached target machines.target - Containers. Feb 13 15:16:55.353187 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:16:55.353197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:55.353208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:55.353219 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:16:55.353229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:55.353239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:55.353248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:55.353258 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:16:55.353271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:55.353281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:16:55.353292 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:16:55.353302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:16:55.353312 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:16:55.353322 systemd[1]: Stopped systemd-fsck-usr.service. 
Feb 13 15:16:55.353333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:55.353344 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:55.353355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:55.353366 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:16:55.353376 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:16:55.353386 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:16:55.353397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:55.353409 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:16:55.353419 systemd[1]: Stopped verity-setup.service. Feb 13 15:16:55.353429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:16:55.353440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:16:55.353452 kernel: fuse: init (API version 7.39) Feb 13 15:16:55.353463 kernel: loop: module loaded Feb 13 15:16:55.353474 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:16:55.353484 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:16:55.353495 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:16:55.353506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:16:55.353516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:55.353526 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 13 15:16:55.353536 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:16:55.353547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:55.353557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:55.353570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:55.353581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:55.353591 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:16:55.353602 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:16:55.353614 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:55.353624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:55.353634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:55.353646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:16:55.353657 kernel: ACPI: bus type drm_connector registered Feb 13 15:16:55.353699 systemd-journald[1129]: Collecting audit messages is disabled. Feb 13 15:16:55.353727 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:16:55.353738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:16:55.353748 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:16:55.353759 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:55.353769 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:16:55.353782 systemd-journald[1129]: Journal started Feb 13 15:16:55.353804 systemd-journald[1129]: Runtime Journal (/run/log/journal/c61f4f18c4874c7f9e1a12ea138da4cb) is 8M, max 76.6M, 68.6M free. 
Feb 13 15:16:55.084827 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:16:55.097123 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:16:55.097598 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:16:55.362074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:16:55.366569 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:16:55.367981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:55.376144 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:16:55.379020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:55.382166 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:16:55.383977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:55.391959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:55.398437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:16:55.404561 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:16:55.411497 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:55.413006 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:16:55.414362 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:55.415034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 15:16:55.417362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:16:55.418980 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:16:55.420157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:16:55.422965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:16:55.425285 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:16:55.449408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:16:55.460119 kernel: loop0: detected capacity change from 0 to 194096 Feb 13 15:16:55.462848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:16:55.465162 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:16:55.475952 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:16:55.480470 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:16:55.483497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:55.495733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:55.498584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:16:55.502606 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Feb 13 15:16:55.510407 systemd-journald[1129]: Time spent on flushing to /var/log/journal/c61f4f18c4874c7f9e1a12ea138da4cb is 52.033ms for 1153 entries. Feb 13 15:16:55.510407 systemd-journald[1129]: System Journal (/var/log/journal/c61f4f18c4874c7f9e1a12ea138da4cb) is 8M, max 584.8M, 576.8M free. Feb 13 15:16:55.574872 systemd-journald[1129]: Received client request to flush runtime journal. 
Feb 13 15:16:55.574929 kernel: loop1: detected capacity change from 0 to 8 Feb 13 15:16:55.574966 kernel: loop2: detected capacity change from 0 to 123192 Feb 13 15:16:55.502862 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Feb 13 15:16:55.510306 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:16:55.512502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:55.522732 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:16:55.538825 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:16:55.573111 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:16:55.577546 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:16:55.599995 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:16:55.607369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:55.622003 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 15:16:55.630703 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Feb 13 15:16:55.630724 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Feb 13 15:16:55.640976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:55.657007 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 15:16:55.685029 kernel: loop5: detected capacity change from 0 to 8 Feb 13 15:16:55.690733 kernel: loop6: detected capacity change from 0 to 123192 Feb 13 15:16:55.711958 kernel: loop7: detected capacity change from 0 to 113512 Feb 13 15:16:55.728466 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. 
Feb 13 15:16:55.731145 (sd-merge)[1211]: Merged extensions into '/usr'. Feb 13 15:16:55.736753 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:16:55.736864 systemd[1]: Reloading... Feb 13 15:16:55.859968 zram_generator::config[1239]: No configuration found. Feb 13 15:16:55.880923 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:16:55.989200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:56.050723 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:16:56.051168 systemd[1]: Reloading finished in 313 ms. Feb 13 15:16:56.067476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:16:56.068719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:16:56.081171 systemd[1]: Starting ensure-sysext.service... Feb 13 15:16:56.089113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:56.106105 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:16:56.106124 systemd[1]: Reloading... Feb 13 15:16:56.119875 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:16:56.122276 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:16:56.122916 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:16:56.123167 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Feb 13 15:16:56.123212 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. 
Feb 13 15:16:56.128323 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:56.128338 systemd-tmpfiles[1277]: Skipping /boot Feb 13 15:16:56.143126 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:56.143137 systemd-tmpfiles[1277]: Skipping /boot Feb 13 15:16:56.164970 zram_generator::config[1302]: No configuration found. Feb 13 15:16:56.281425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:56.344305 systemd[1]: Reloading finished in 237 ms. Feb 13 15:16:56.359172 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:16:56.370007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:56.382336 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:16:56.387369 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:16:56.392972 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:16:56.405514 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:16:56.409098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:56.423191 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:16:56.427447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:56.434293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:56.440294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:16:56.449325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:56.450111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:56.450217 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:56.461867 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:16:56.466968 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:16:56.469085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:56.470077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:56.472115 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Feb 13 15:16:56.479232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:56.480857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:56.483761 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:56.485649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:56.490261 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:16:56.498601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:56.502361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:56.504221 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:56.507411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:16:56.508032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:56.508177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:56.511564 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:16:56.516486 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:16:56.517453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:56.522504 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:56.524294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:56.533084 systemd[1]: Finished ensure-sysext.service. Feb 13 15:16:56.540828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:56.552777 augenrules[1403]: No rules Feb 13 15:16:56.556287 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:56.556914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:56.557116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:16:56.571177 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:56.578297 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Feb 13 15:16:56.579512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:56.579985 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:16:56.582004 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:16:56.583356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:56.583549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:56.593135 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:16:56.593382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:56.595326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:56.596986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:56.599710 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:56.599898 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:56.603668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:56.608522 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:16:56.618521 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:16:56.751398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:16:56.753245 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:16:56.765150 systemd-networkd[1413]: lo: Link UP Feb 13 15:16:56.765161 systemd-networkd[1413]: lo: Gained carrier Feb 13 15:16:56.766720 systemd-networkd[1413]: Enumeration completed Feb 13 15:16:56.766817 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:56.770280 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:56.770290 systemd-networkd[1413]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:56.773234 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:56.773263 systemd-networkd[1413]: eth1: Link UP Feb 13 15:16:56.773266 systemd-networkd[1413]: eth1: Gained carrier Feb 13 15:16:56.773275 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:56.777287 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:16:56.781459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:16:56.799609 systemd-resolved[1352]: Positive Trust Anchors: Feb 13 15:16:56.799992 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:56.800098 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:56.802135 systemd-networkd[1413]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:16:56.803370 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:56.803379 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:56.803418 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:56.804306 systemd-networkd[1413]: eth0: Link UP Feb 13 15:16:56.804315 systemd-networkd[1413]: eth0: Gained carrier Feb 13 15:16:56.804328 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:56.805026 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:56.806565 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:56.806861 systemd-resolved[1352]: Using system hostname 'ci-4230-0-1-0-5f4e073373'. Feb 13 15:16:56.809143 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 13 15:16:56.811897 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:16:56.812791 systemd[1]: Reached target network.target - Network. Feb 13 15:16:56.813572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:56.831160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1392) Feb 13 15:16:56.870137 systemd-networkd[1413]: eth0: DHCPv4 address 188.245.168.142/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:16:56.870699 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:56.873986 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:16:56.898697 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:16:56.898817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:56.907146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:56.910132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:56.928268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:56.928887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:56.928929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Feb 13 15:16:56.929328 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:56.929726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:56.930347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:56.934568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:16:56.936279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:56.936456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:56.937541 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:56.937886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:56.952646 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:16:56.952959 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:16:56.953968 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:16:56.954017 kernel: [drm] features: -context_init Feb 13 15:16:56.954130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:56.954182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:56.955135 kernel: [drm] number of scanouts: 1 Feb 13 15:16:56.955176 kernel: [drm] number of cap sets: 0 Feb 13 15:16:56.958964 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:16:56.966008 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:16:56.965667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:16:56.978231 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:16:56.983346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:16:56.993182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:56.994002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:57.004390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:57.065106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:57.103052 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:16:57.111238 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:16:57.123991 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:57.151002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:16:57.152797 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:57.153474 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:57.154110 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:16:57.154734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:16:57.155734 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:16:57.156434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:16:57.157159 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Feb 13 15:16:57.157813 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:16:57.157849 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:57.158381 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:57.160302 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:16:57.162149 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:16:57.165155 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:16:57.165999 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:16:57.166716 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:16:57.169603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:16:57.170773 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:16:57.172875 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:16:57.174275 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:16:57.174886 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:57.175451 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:16:57.176016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:57.176058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:57.182162 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:16:57.186131 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:16:57.187945 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 15:16:57.191543 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:16:57.197115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:16:57.198709 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:16:57.199381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:16:57.202120 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:16:57.205053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:16:57.210550 jq[1483]: false Feb 13 15:16:57.213995 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:16:57.229853 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:16:57.236189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:16:57.240773 coreos-metadata[1479]: Feb 13 15:16:57.240 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:16:57.241659 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:16:57.243903 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:16:57.245530 coreos-metadata[1479]: Feb 13 15:16:57.244 INFO Fetch successful Feb 13 15:16:57.245530 coreos-metadata[1479]: Feb 13 15:16:57.245 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:16:57.246415 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:16:57.246534 coreos-metadata[1479]: Feb 13 15:16:57.246 INFO Fetch successful Feb 13 15:16:57.251185 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 15:16:57.253648 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:16:57.260234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:16:57.261973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:16:57.270899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:16:57.272356 dbus-daemon[1480]: [system] SELinux support is enabled Feb 13 15:16:57.272970 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:16:57.273878 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:16:57.280449 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:16:57.280657 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:16:57.282576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:16:57.282625 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:16:57.284218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:16:57.284242 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:16:57.288552 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 15:16:57.298646 extend-filesystems[1484]: Found loop4 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found loop5 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found loop6 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found loop7 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda1 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda2 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda3 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found usr Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda4 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda6 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda7 Feb 13 15:16:57.298646 extend-filesystems[1484]: Found sda9 Feb 13 15:16:57.298646 extend-filesystems[1484]: Checking size of /dev/sda9 Feb 13 15:16:57.331512 jq[1499]: true Feb 13 15:16:57.335148 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:16:57.343303 extend-filesystems[1484]: Resized partition /dev/sda9 Feb 13 15:16:57.350314 jq[1518]: true Feb 13 15:16:57.352601 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:16:57.357049 tar[1503]: linux-arm64/helm Feb 13 15:16:57.363193 update_engine[1494]: I20250213 15:16:57.357249 1494 main.cc:92] Flatcar Update Engine starting Feb 13 15:16:57.372026 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:16:57.372108 update_engine[1494]: I20250213 15:16:57.370029 1494 update_check_scheduler.cc:74] Next update check in 8m28s Feb 13 15:16:57.368705 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:16:57.380181 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:16:57.457625 systemd-logind[1492]: New seat seat0. Feb 13 15:16:57.466562 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Feb 13 15:16:57.468572 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:16:57.472966 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:16:57.473017 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:16:57.473957 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:16:57.490065 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1405) Feb 13 15:16:57.523001 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:16:57.533391 bash[1548]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:57.534238 extend-filesystems[1523]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:16:57.534238 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:16:57.534238 extend-filesystems[1523]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:16:57.537651 extend-filesystems[1484]: Resized filesystem in /dev/sda9 Feb 13 15:16:57.537651 extend-filesystems[1484]: Found sr0 Feb 13 15:16:57.539415 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:16:57.541980 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:16:57.543203 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:16:57.563866 systemd[1]: Starting sshkeys.service... Feb 13 15:16:57.592016 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:16:57.602231 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 15:16:57.653998 coreos-metadata[1561]: Feb 13 15:16:57.653 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:16:57.653998 coreos-metadata[1561]: Feb 13 15:16:57.653 INFO Fetch successful Feb 13 15:16:57.656721 unknown[1561]: wrote ssh authorized keys file for user: core Feb 13 15:16:57.687701 containerd[1515]: time="2025-02-13T15:16:57.687239080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:16:57.691411 update-ssh-keys[1566]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:57.693045 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:16:57.696434 systemd[1]: Finished sshkeys.service. Feb 13 15:16:57.745283 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:16:57.757427 containerd[1515]: time="2025-02-13T15:16:57.757373200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760275 containerd[1515]: time="2025-02-13T15:16:57.760237560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760275 containerd[1515]: time="2025-02-13T15:16:57.760273200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:16:57.760404 containerd[1515]: time="2025-02-13T15:16:57.760384360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:16:57.760569 containerd[1515]: time="2025-02-13T15:16:57.760546640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 15:16:57.760596 containerd[1515]: time="2025-02-13T15:16:57.760570240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760655 containerd[1515]: time="2025-02-13T15:16:57.760634960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760655 containerd[1515]: time="2025-02-13T15:16:57.760649840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760868 containerd[1515]: time="2025-02-13T15:16:57.760844480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760868 containerd[1515]: time="2025-02-13T15:16:57.760865200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760921 containerd[1515]: time="2025-02-13T15:16:57.760897120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:57.760921 containerd[1515]: time="2025-02-13T15:16:57.760908120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.762105 containerd[1515]: time="2025-02-13T15:16:57.762074640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:16:57.762315 containerd[1515]: time="2025-02-13T15:16:57.762289320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:16:57.762454 containerd[1515]: time="2025-02-13T15:16:57.762424760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:16:57.762454 containerd[1515]: time="2025-02-13T15:16:57.762442680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:16:57.762574 containerd[1515]: time="2025-02-13T15:16:57.762544520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:16:57.762638 containerd[1515]: time="2025-02-13T15:16:57.762616320Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:16:57.769552 containerd[1515]: time="2025-02-13T15:16:57.769515080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:16:57.769626 containerd[1515]: time="2025-02-13T15:16:57.769567840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:16:57.769626 containerd[1515]: time="2025-02-13T15:16:57.769583400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:16:57.769626 containerd[1515]: time="2025-02-13T15:16:57.769604600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:16:57.769626 containerd[1515]: time="2025-02-13T15:16:57.769620200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:16:57.769800 containerd[1515]: time="2025-02-13T15:16:57.769776280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 15:16:57.770132 containerd[1515]: time="2025-02-13T15:16:57.770066800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770200000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770221600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770235960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770249520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770261440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770273280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770286480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770317800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770332120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770345360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770355960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770376280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770391240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770478 containerd[1515]: time="2025-02-13T15:16:57.770405040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770419400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770430680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770442840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770453240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770465000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770478000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770491400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770502520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770514520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770525960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770539120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770569840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770592880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.770755 containerd[1515]: time="2025-02-13T15:16:57.770604360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770810520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770831440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770841480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770852600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770862160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770873880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770882960Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:16:57.771470 containerd[1515]: time="2025-02-13T15:16:57.770892320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:16:57.771625 containerd[1515]: time="2025-02-13T15:16:57.771313520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:16:57.771625 containerd[1515]: time="2025-02-13T15:16:57.771364080Z" level=info msg="Connect containerd service" Feb 13 15:16:57.771625 containerd[1515]: time="2025-02-13T15:16:57.771398600Z" level=info msg="using legacy CRI server" Feb 13 15:16:57.771625 containerd[1515]: time="2025-02-13T15:16:57.771405200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:16:57.771774 containerd[1515]: time="2025-02-13T15:16:57.771647000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774352040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774812320Z" level=info msg="Start subscribing containerd event" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774865680Z" level=info msg="Start recovering state" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774926240Z" level=info msg="Start event monitor" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774952280Z" level=info msg="Start 
snapshots syncer" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774961840Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:16:57.775555 containerd[1515]: time="2025-02-13T15:16:57.774970200Z" level=info msg="Start streaming server" Feb 13 15:16:57.777064 containerd[1515]: time="2025-02-13T15:16:57.777030320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:16:57.777112 containerd[1515]: time="2025-02-13T15:16:57.777091560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:16:57.777230 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:16:57.779840 containerd[1515]: time="2025-02-13T15:16:57.777836040Z" level=info msg="containerd successfully booted in 0.095400s" Feb 13 15:16:57.990141 tar[1503]: linux-arm64/LICENSE Feb 13 15:16:57.990340 tar[1503]: linux-arm64/README.md Feb 13 15:16:58.002805 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:16:58.208317 systemd-networkd[1413]: eth0: Gained IPv6LL Feb 13 15:16:58.209202 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:58.211120 systemd-networkd[1413]: eth1: Gained IPv6LL Feb 13 15:16:58.212580 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:16:58.215286 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:16:58.216514 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:16:58.226457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:58.230095 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:16:58.258493 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:16:58.894121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:16:58.899174 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:16:59.192778 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:16:59.214865 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:16:59.221451 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:16:59.228673 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:16:59.229958 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:16:59.241625 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:16:59.253992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:16:59.261316 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:16:59.269245 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:16:59.270257 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:16:59.271519 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:16:59.272704 systemd[1]: Startup finished in 785ms (kernel) + 8.881s (initrd) + 4.740s (userspace) = 14.407s. Feb 13 15:16:59.466915 kubelet[1593]: E0213 15:16:59.466797 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:16:59.471504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:16:59.471721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:16:59.472252 systemd[1]: kubelet.service: Consumed 851ms CPU time, 237.9M memory peak. 
Feb 13 15:17:09.586833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:17:09.594598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:09.686196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:09.689614 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:09.740697 kubelet[1630]: E0213 15:17:09.740638 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:09.744554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:09.744722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:09.745415 systemd[1]: kubelet.service: Consumed 138ms CPU time, 95.1M memory peak. Feb 13 15:17:19.834735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:17:19.842159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:19.930728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:17:19.944852 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:20.001234 kubelet[1646]: E0213 15:17:20.001146 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:20.004740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:20.005503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:20.005810 systemd[1]: kubelet.service: Consumed 146ms CPU time, 95M memory peak. Feb 13 15:17:28.452740 systemd-timesyncd[1414]: Contacted time server 129.250.35.251:123 (2.flatcar.pool.ntp.org). Feb 13 15:17:28.452839 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2025-02-13 15:17:28.780792 UTC. Feb 13 15:17:30.085777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:17:30.092340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:30.197474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:17:30.203717 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:30.252901 kubelet[1661]: E0213 15:17:30.252840 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:30.256483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:30.256796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:30.257571 systemd[1]: kubelet.service: Consumed 138ms CPU time, 95.4M memory peak. Feb 13 15:17:40.334746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:17:40.352443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:40.450204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:40.459678 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:40.510736 kubelet[1678]: E0213 15:17:40.510690 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:40.513798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:40.513980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:40.514483 systemd[1]: kubelet.service: Consumed 144ms CPU time, 96.3M memory peak. 
Feb 13 15:17:42.881123 update_engine[1494]: I20250213 15:17:42.880275 1494 update_attempter.cc:509] Updating boot flags... Feb 13 15:17:42.930960 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1695) Feb 13 15:17:42.988012 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1694) Feb 13 15:17:43.047082 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1694) Feb 13 15:17:46.088385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:17:46.098459 systemd[1]: Started sshd@0-188.245.168.142:22-139.178.68.195:34926.service - OpenSSH per-connection server daemon (139.178.68.195:34926). Feb 13 15:17:47.096572 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 34926 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:47.100861 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:47.114140 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:47.120570 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:17:47.131305 systemd-logind[1492]: New session 1 of user core. Feb 13 15:17:47.137554 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:47.154596 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:47.159733 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:47.163342 systemd-logind[1492]: New session c1 of user core. Feb 13 15:17:47.296670 systemd[1712]: Queued start job for default target default.target. Feb 13 15:17:47.307015 systemd[1712]: Created slice app.slice - User Application Slice. Feb 13 15:17:47.307071 systemd[1712]: Reached target paths.target - Paths. 
Feb 13 15:17:47.307150 systemd[1712]: Reached target timers.target - Timers. Feb 13 15:17:47.310014 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:17:47.332695 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:47.332844 systemd[1712]: Reached target sockets.target - Sockets. Feb 13 15:17:47.332895 systemd[1712]: Reached target basic.target - Basic System. Feb 13 15:17:47.332982 systemd[1712]: Reached target default.target - Main User Target. Feb 13 15:17:47.333151 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:47.333529 systemd[1712]: Startup finished in 162ms. Feb 13 15:17:47.344224 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:17:48.048972 systemd[1]: Started sshd@1-188.245.168.142:22-139.178.68.195:55564.service - OpenSSH per-connection server daemon (139.178.68.195:55564). Feb 13 15:17:49.038320 sshd[1723]: Accepted publickey for core from 139.178.68.195 port 55564 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:49.040125 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:49.045970 systemd-logind[1492]: New session 2 of user core. Feb 13 15:17:49.051161 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:17:49.720608 sshd[1725]: Connection closed by 139.178.68.195 port 55564 Feb 13 15:17:49.721573 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:49.726405 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:49.726538 systemd[1]: sshd@1-188.245.168.142:22-139.178.68.195:55564.service: Deactivated successfully. Feb 13 15:17:49.730140 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:49.731995 systemd-logind[1492]: Removed session 2. 
Feb 13 15:17:49.899382 systemd[1]: Started sshd@2-188.245.168.142:22-139.178.68.195:55574.service - OpenSSH per-connection server daemon (139.178.68.195:55574). Feb 13 15:17:50.584461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:17:50.594258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:50.692514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:50.704493 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:50.750215 kubelet[1740]: E0213 15:17:50.750158 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:50.752221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:50.752364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:50.753011 systemd[1]: kubelet.service: Consumed 136ms CPU time, 96.5M memory peak. Feb 13 15:17:50.887197 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 55574 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:50.889397 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:50.895089 systemd-logind[1492]: New session 3 of user core. Feb 13 15:17:50.904208 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:17:51.565429 sshd[1748]: Connection closed by 139.178.68.195 port 55574 Feb 13 15:17:51.566487 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:51.570953 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:17:51.571867 systemd[1]: sshd@2-188.245.168.142:22-139.178.68.195:55574.service: Deactivated successfully. Feb 13 15:17:51.573585 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:51.574907 systemd-logind[1492]: Removed session 3. Feb 13 15:17:51.739136 systemd[1]: Started sshd@3-188.245.168.142:22-139.178.68.195:55580.service - OpenSSH per-connection server daemon (139.178.68.195:55580). Feb 13 15:17:52.722219 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 55580 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:52.724055 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:52.729363 systemd-logind[1492]: New session 4 of user core. Feb 13 15:17:52.745267 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:17:53.396857 sshd[1756]: Connection closed by 139.178.68.195 port 55580 Feb 13 15:17:53.397933 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:53.401466 systemd[1]: sshd@3-188.245.168.142:22-139.178.68.195:55580.service: Deactivated successfully. Feb 13 15:17:53.402995 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:53.405387 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:53.406882 systemd-logind[1492]: Removed session 4. Feb 13 15:17:53.574399 systemd[1]: Started sshd@4-188.245.168.142:22-139.178.68.195:55596.service - OpenSSH per-connection server daemon (139.178.68.195:55596). Feb 13 15:17:54.554991 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 55596 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:54.557114 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:54.561676 systemd-logind[1492]: New session 5 of user core.
Feb 13 15:17:54.573238 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:55.084121 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:17:55.084470 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:55.099293 sudo[1765]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:55.258064 sshd[1764]: Connection closed by 139.178.68.195 port 55596 Feb 13 15:17:55.259356 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:55.263109 systemd[1]: sshd@4-188.245.168.142:22-139.178.68.195:55596.service: Deactivated successfully. Feb 13 15:17:55.264880 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:55.266440 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:55.268211 systemd-logind[1492]: Removed session 5. Feb 13 15:17:55.440263 systemd[1]: Started sshd@5-188.245.168.142:22-139.178.68.195:55598.service - OpenSSH per-connection server daemon (139.178.68.195:55598). Feb 13 15:17:56.431044 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 55598 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:56.433408 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:56.440050 systemd-logind[1492]: New session 6 of user core. Feb 13 15:17:56.445186 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:17:56.958231 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:17:56.958528 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:56.963747 sudo[1775]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:56.969079 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:17:56.969361 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:56.987485 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:17:57.017158 augenrules[1797]: No rules Feb 13 15:17:57.018397 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:17:57.018654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:17:57.020527 sudo[1774]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:57.179962 sshd[1773]: Connection closed by 139.178.68.195 port 55598 Feb 13 15:17:57.180847 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:57.185363 systemd[1]: sshd@5-188.245.168.142:22-139.178.68.195:55598.service: Deactivated successfully. Feb 13 15:17:57.187200 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:17:57.188691 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:17:57.189677 systemd-logind[1492]: Removed session 6. Feb 13 15:17:57.357464 systemd[1]: Started sshd@6-188.245.168.142:22-139.178.68.195:44384.service - OpenSSH per-connection server daemon (139.178.68.195:44384). 
Feb 13 15:17:58.339632 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 44384 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:17:58.341192 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.347290 systemd-logind[1492]: New session 7 of user core. Feb 13 15:17:58.350113 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:17:58.858593 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:58.858863 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:59.176308 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:59.176340 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:59.409891 dockerd[1825]: time="2025-02-13T15:17:59.409470546Z" level=info msg="Starting up" Feb 13 15:17:59.512218 dockerd[1825]: time="2025-02-13T15:17:59.512075180Z" level=info msg="Loading containers: start." Feb 13 15:17:59.669014 kernel: Initializing XFRM netlink socket Feb 13 15:17:59.753214 systemd-networkd[1413]: docker0: Link UP Feb 13 15:17:59.793387 dockerd[1825]: time="2025-02-13T15:17:59.793234345Z" level=info msg="Loading containers: done." 
Feb 13 15:17:59.808809 dockerd[1825]: time="2025-02-13T15:17:59.808703693Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:59.808809 dockerd[1825]: time="2025-02-13T15:17:59.808814056Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:17:59.809139 dockerd[1825]: time="2025-02-13T15:17:59.809043187Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:59.847796 dockerd[1825]: time="2025-02-13T15:17:59.847740954Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:59.848180 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:18:00.834100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:18:00.847303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:00.951234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:18:00.955991 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:18:00.996839 containerd[1515]: time="2025-02-13T15:18:00.996463215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:18:01.012566 kubelet[2025]: E0213 15:18:01.012432 2025 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:18:01.016432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:18:01.016669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:18:01.018150 systemd[1]: kubelet.service: Consumed 133ms CPU time, 94.3M memory peak. Feb 13 15:18:01.642695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8142616.mount: Deactivated successfully. 
Feb 13 15:18:04.794977 containerd[1515]: time="2025-02-13T15:18:04.793695345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.794977 containerd[1515]: time="2025-02-13T15:18:04.794855336Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865299" Feb 13 15:18:04.796403 containerd[1515]: time="2025-02-13T15:18:04.796294496Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.804748 containerd[1515]: time="2025-02-13T15:18:04.804675908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.809101 containerd[1515]: time="2025-02-13T15:18:04.808059928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.811552639s" Feb 13 15:18:04.809101 containerd[1515]: time="2025-02-13T15:18:04.808112247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:18:04.834615 containerd[1515]: time="2025-02-13T15:18:04.834568708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:18:07.581470 containerd[1515]: time="2025-02-13T15:18:07.581329341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.582759 containerd[1515]: time="2025-02-13T15:18:07.582672355Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898614" Feb 13 15:18:07.583560 containerd[1515]: time="2025-02-13T15:18:07.583497680Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:07.587639 containerd[1515]: time="2025-02-13T15:18:07.587530325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:07.588907 containerd[1515]: time="2025-02-13T15:18:07.588851566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.754230419s" Feb 13 15:18:07.588907 containerd[1515]: time="2025-02-13T15:18:07.588896394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 15:18:07.611015 containerd[1515]: time="2025-02-13T15:18:07.610968193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:18:09.310994 containerd[1515]: time="2025-02-13T15:18:09.310857880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:09.313780 containerd[1515]: time="2025-02-13T15:18:09.313734923Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:09.314964 containerd[1515]: time="2025-02-13T15:18:09.314086243Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164954" Feb 13 15:18:09.320860 containerd[1515]: time="2025-02-13T15:18:09.320812164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:09.322257 containerd[1515]: time="2025-02-13T15:18:09.322225091Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.711212431s" Feb 13 15:18:09.322368 containerd[1515]: time="2025-02-13T15:18:09.322351683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:18:09.345245 containerd[1515]: time="2025-02-13T15:18:09.345203652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:18:10.465282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352418597.mount: Deactivated successfully. Feb 13 15:18:11.083739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:18:11.090272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:11.142702 containerd[1515]: time="2025-02-13T15:18:11.142020099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:11.146032 containerd[1515]: time="2025-02-13T15:18:11.145985577Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396" Feb 13 15:18:11.149268 containerd[1515]: time="2025-02-13T15:18:11.149232286Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:11.152295 containerd[1515]: time="2025-02-13T15:18:11.152255559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:11.153652 containerd[1515]: time="2025-02-13T15:18:11.152928385Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.807678667s" Feb 13 15:18:11.153652 containerd[1515]: time="2025-02-13T15:18:11.153291772Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:18:11.185573 containerd[1515]: time="2025-02-13T15:18:11.185514210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:18:11.214592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:18:11.218574 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:18:11.272339 kubelet[2131]: E0213 15:18:11.272293 2131 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:18:11.275838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:18:11.276194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:18:11.278058 systemd[1]: kubelet.service: Consumed 147ms CPU time, 94.9M memory peak. Feb 13 15:18:11.771496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356429609.mount: Deactivated successfully. Feb 13 15:18:12.383201 containerd[1515]: time="2025-02-13T15:18:12.383153910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:12.384791 containerd[1515]: time="2025-02-13T15:18:12.384085044Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:18:12.386550 containerd[1515]: time="2025-02-13T15:18:12.386517231Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:12.389757 containerd[1515]: time="2025-02-13T15:18:12.389700304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:12.393050 containerd[1515]: time="2025-02-13T15:18:12.392985947Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.207412387s" Feb 13 15:18:12.393257 containerd[1515]: time="2025-02-13T15:18:12.393219021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:18:12.421628 containerd[1515]: time="2025-02-13T15:18:12.421400692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:18:13.001145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988270412.mount: Deactivated successfully. Feb 13 15:18:13.009439 containerd[1515]: time="2025-02-13T15:18:13.009347247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:13.010997 containerd[1515]: time="2025-02-13T15:18:13.010888762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:18:13.012616 containerd[1515]: time="2025-02-13T15:18:13.012540287Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:13.017348 containerd[1515]: time="2025-02-13T15:18:13.016548786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:13.017929 containerd[1515]: time="2025-02-13T15:18:13.017895891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 596.459181ms"
Feb 13 15:18:13.017929 containerd[1515]: time="2025-02-13T15:18:13.017926545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:18:13.041713 containerd[1515]: time="2025-02-13T15:18:13.041662350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:18:13.651337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608247403.mount: Deactivated successfully. Feb 13 15:18:17.378053 containerd[1515]: time="2025-02-13T15:18:17.376857710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.380195 containerd[1515]: time="2025-02-13T15:18:17.380114189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Feb 13 15:18:17.381507 containerd[1515]: time="2025-02-13T15:18:17.381395837Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.386761 containerd[1515]: time="2025-02-13T15:18:17.386688452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.388310 containerd[1515]: time="2025-02-13T15:18:17.388174698Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.34647061s"
Feb 13 15:18:17.388310 containerd[1515]: time="2025-02-13T15:18:17.388210432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:18:21.335370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:18:21.346259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:21.457139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:21.460209 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:18:21.500846 kubelet[2305]: E0213 15:18:21.500796 2305 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:18:21.505260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:18:21.505409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:18:21.505721 systemd[1]: kubelet.service: Consumed 120ms CPU time, 93.8M memory peak. Feb 13 15:18:22.051433 systemd[1]: Started sshd@7-188.245.168.142:22-188.151.169.196:39854.service - OpenSSH per-connection server daemon (188.151.169.196:39854). Feb 13 15:18:22.308139 sshd[2314]: Connection closed by 188.151.169.196 port 39854 Feb 13 15:18:22.311115 systemd[1]: sshd@7-188.245.168.142:22-188.151.169.196:39854.service: Deactivated successfully. Feb 13 15:18:22.687684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:22.688092 systemd[1]: kubelet.service: Consumed 120ms CPU time, 93.8M memory peak. Feb 13 15:18:22.703426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:22.729172 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-7.scope)... Feb 13 15:18:22.729318 systemd[1]: Reloading... Feb 13 15:18:22.869000 zram_generator::config[2386]: No configuration found. Feb 13 15:18:22.943015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:18:23.033476 systemd[1]: Reloading finished in 303 ms. Feb 13 15:18:23.077623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:23.081814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:23.087577 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:18:23.088171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:23.088255 systemd[1]: kubelet.service: Consumed 80ms CPU time, 82.3M memory peak. Feb 13 15:18:23.094418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:23.197260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:23.207919 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:18:23.256314 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:23.256980 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 13 15:18:23.256980 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:23.256980 kubelet[2418]: I0213 15:18:23.256823 2418 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:18:23.742967 kubelet[2418]: I0213 15:18:23.741188 2418 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:18:23.742967 kubelet[2418]: I0213 15:18:23.741222 2418 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:18:23.742967 kubelet[2418]: I0213 15:18:23.741533 2418 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:18:23.761494 kubelet[2418]: I0213 15:18:23.761449 2418 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:18:23.761726 kubelet[2418]: E0213 15:18:23.761689 2418 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.168.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.771555 kubelet[2418]: I0213 15:18:23.771518 2418 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:18:23.773308 kubelet[2418]: I0213 15:18:23.773250 2418 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:18:23.773674 kubelet[2418]: I0213 15:18:23.773408 2418 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-0-5f4e073373","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:18:23.775668 kubelet[2418]: I0213 15:18:23.775494 2418 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 15:18:23.775668 kubelet[2418]: I0213 15:18:23.775670 2418 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:18:23.776007 kubelet[2418]: I0213 15:18:23.775989 2418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:23.777064 kubelet[2418]: I0213 15:18:23.777046 2418 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:18:23.779022 kubelet[2418]: I0213 15:18:23.777068 2418 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:18:23.779022 kubelet[2418]: I0213 15:18:23.777267 2418 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:18:23.779022 kubelet[2418]: I0213 15:18:23.777385 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:18:23.779022 kubelet[2418]: W0213 15:18:23.777842 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.168.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-0-5f4e073373&limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.779022 kubelet[2418]: E0213 15:18:23.777890 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.168.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-0-5f4e073373&limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.779022 kubelet[2418]: W0213 15:18:23.778367 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.168.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.779022 kubelet[2418]: E0213 15:18:23.778402 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://188.245.168.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.779418 kubelet[2418]: I0213 15:18:23.779390 2418 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:18:23.779813 kubelet[2418]: I0213 15:18:23.779789 2418 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:18:23.779911 kubelet[2418]: W0213 15:18:23.779896 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:18:23.780929 kubelet[2418]: I0213 15:18:23.780746 2418 server.go:1264] "Started kubelet" Feb 13 15:18:23.785635 kubelet[2418]: I0213 15:18:23.785562 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:18:23.786756 kubelet[2418]: E0213 15:18:23.786484 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.168.142:6443/api/v1/namespaces/default/events\": dial tcp 188.245.168.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-0-5f4e073373.1823cd9101e2a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-0-5f4e073373,UID:ci-4230-0-1-0-5f4e073373,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-0-5f4e073373,},FirstTimestamp:2025-02-13 15:18:23.780727153 +0000 UTC m=+0.569014761,LastTimestamp:2025-02-13 15:18:23.780727153 +0000 UTC m=+0.569014761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-0-5f4e073373,}" Feb 13 15:18:23.791060 kubelet[2418]: I0213 15:18:23.791023 2418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 
13 15:18:23.792213 kubelet[2418]: I0213 15:18:23.792190 2418 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:18:23.792614 kubelet[2418]: I0213 15:18:23.792577 2418 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:18:23.793247 kubelet[2418]: I0213 15:18:23.793197 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:18:23.793520 kubelet[2418]: I0213 15:18:23.793504 2418 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:18:23.794137 kubelet[2418]: E0213 15:18:23.794108 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.168.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-0-5f4e073373?timeout=10s\": dial tcp 188.245.168.142:6443: connect: connection refused" interval="200ms" Feb 13 15:18:23.794248 kubelet[2418]: I0213 15:18:23.794236 2418 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:18:23.795049 kubelet[2418]: W0213 15:18:23.795007 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.168.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.795145 kubelet[2418]: E0213 15:18:23.795134 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.168.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.795415 kubelet[2418]: I0213 15:18:23.795394 2418 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:18:23.795603 kubelet[2418]: I0213 15:18:23.795584 2418 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:18:23.795961 kubelet[2418]: E0213 15:18:23.795923 2418 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:18:23.797061 kubelet[2418]: I0213 15:18:23.797042 2418 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:18:23.798495 kubelet[2418]: I0213 15:18:23.798464 2418 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:18:23.809317 kubelet[2418]: I0213 15:18:23.809258 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:18:23.810384 kubelet[2418]: I0213 15:18:23.810352 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:18:23.810449 kubelet[2418]: I0213 15:18:23.810404 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:18:23.810449 kubelet[2418]: I0213 15:18:23.810431 2418 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:18:23.810512 kubelet[2418]: E0213 15:18:23.810488 2418 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:18:23.817459 kubelet[2418]: W0213 15:18:23.817358 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.168.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.817544 kubelet[2418]: E0213 15:18:23.817473 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.168.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
188.245.168.142:6443: connect: connection refused Feb 13 15:18:23.825355 kubelet[2418]: I0213 15:18:23.825271 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:18:23.825355 kubelet[2418]: I0213 15:18:23.825299 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:18:23.825355 kubelet[2418]: I0213 15:18:23.825317 2418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:23.828024 kubelet[2418]: I0213 15:18:23.827963 2418 policy_none.go:49] "None policy: Start" Feb 13 15:18:23.829599 kubelet[2418]: I0213 15:18:23.829063 2418 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:18:23.829599 kubelet[2418]: I0213 15:18:23.829123 2418 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:18:23.837517 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:18:23.847257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:18:23.851354 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:18:23.863083 kubelet[2418]: I0213 15:18:23.863045 2418 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:18:23.864119 kubelet[2418]: I0213 15:18:23.863646 2418 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:18:23.864119 kubelet[2418]: I0213 15:18:23.863806 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:18:23.867296 kubelet[2418]: E0213 15:18:23.867192 2418 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-0-5f4e073373\" not found" Feb 13 15:18:23.897124 kubelet[2418]: I0213 15:18:23.896490 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:23.897124 kubelet[2418]: E0213 15:18:23.897016 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.168.142:6443/api/v1/nodes\": dial tcp 188.245.168.142:6443: connect: connection refused" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:23.910986 kubelet[2418]: I0213 15:18:23.910858 2418 topology_manager.go:215] "Topology Admit Handler" podUID="cba2bdb8a6da7a83b9f81d841f553f50" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:23.913591 kubelet[2418]: I0213 15:18:23.913536 2418 topology_manager.go:215] "Topology Admit Handler" podUID="5937ebd0784bc85dd183172a4e9a08b7" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:23.916234 kubelet[2418]: I0213 15:18:23.916199 2418 topology_manager.go:215] "Topology Admit Handler" podUID="3221b5376ab4b7265e714144d794d33d" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:23.924668 systemd[1]: Created slice kubepods-burstable-podcba2bdb8a6da7a83b9f81d841f553f50.slice - libcontainer container 
kubepods-burstable-podcba2bdb8a6da7a83b9f81d841f553f50.slice. Feb 13 15:18:23.948894 systemd[1]: Created slice kubepods-burstable-pod5937ebd0784bc85dd183172a4e9a08b7.slice - libcontainer container kubepods-burstable-pod5937ebd0784bc85dd183172a4e9a08b7.slice. Feb 13 15:18:23.953116 systemd[1]: Created slice kubepods-burstable-pod3221b5376ab4b7265e714144d794d33d.slice - libcontainer container kubepods-burstable-pod3221b5376ab4b7265e714144d794d33d.slice. Feb 13 15:18:23.995743 kubelet[2418]: E0213 15:18:23.994827 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.168.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-0-5f4e073373?timeout=10s\": dial tcp 188.245.168.142:6443: connect: connection refused" interval="400ms" Feb 13 15:18:24.000579 kubelet[2418]: I0213 15:18:24.000172 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: \"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000579 kubelet[2418]: I0213 15:18:24.000231 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3221b5376ab4b7265e714144d794d33d-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-0-5f4e073373\" (UID: \"3221b5376ab4b7265e714144d794d33d\") " pod="kube-system/kube-scheduler-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000579 kubelet[2418]: I0213 15:18:24.000278 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: 
\"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000579 kubelet[2418]: I0213 15:18:24.000307 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: \"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000579 kubelet[2418]: I0213 15:18:24.000342 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000903 kubelet[2418]: I0213 15:18:24.000375 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000903 kubelet[2418]: I0213 15:18:24.000405 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000903 kubelet[2418]: I0213 15:18:24.000432 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.000903 kubelet[2418]: I0213 15:18:24.000460 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.101034 kubelet[2418]: I0213 15:18:24.100480 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.101182 kubelet[2418]: E0213 15:18:24.101110 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.168.142:6443/api/v1/nodes\": dial tcp 188.245.168.142:6443: connect: connection refused" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.245432 containerd[1515]: time="2025-02-13T15:18:24.245307759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-0-5f4e073373,Uid:cba2bdb8a6da7a83b9f81d841f553f50,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:24.254192 containerd[1515]: time="2025-02-13T15:18:24.253693693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-0-5f4e073373,Uid:5937ebd0784bc85dd183172a4e9a08b7,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:24.257491 containerd[1515]: time="2025-02-13T15:18:24.257122767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-0-5f4e073373,Uid:3221b5376ab4b7265e714144d794d33d,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:24.395878 kubelet[2418]: E0213 15:18:24.395749 2418 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://188.245.168.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-0-5f4e073373?timeout=10s\": dial tcp 188.245.168.142:6443: connect: connection refused" interval="800ms" Feb 13 15:18:24.503878 kubelet[2418]: I0213 15:18:24.503831 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.504680 kubelet[2418]: E0213 15:18:24.504569 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.168.142:6443/api/v1/nodes\": dial tcp 188.245.168.142:6443: connect: connection refused" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:24.711733 kubelet[2418]: W0213 15:18:24.711580 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.168.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.711733 kubelet[2418]: E0213 15:18:24.711699 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.168.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.789072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824778754.mount: Deactivated successfully. 
Feb 13 15:18:24.797411 containerd[1515]: time="2025-02-13T15:18:24.797346232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:24.799268 containerd[1515]: time="2025-02-13T15:18:24.799226235Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:24.800807 containerd[1515]: time="2025-02-13T15:18:24.800761702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:18:24.801631 containerd[1515]: time="2025-02-13T15:18:24.801587092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:18:24.803133 containerd[1515]: time="2025-02-13T15:18:24.803081588Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:24.804849 containerd[1515]: time="2025-02-13T15:18:24.804785982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:18:24.808651 containerd[1515]: time="2025-02-13T15:18:24.808291358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.866806ms" Feb 13 15:18:24.808842 containerd[1515]: time="2025-02-13T15:18:24.808790136Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:24.810407 containerd[1515]: time="2025-02-13T15:18:24.810339368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.550448ms" Feb 13 15:18:24.810833 containerd[1515]: time="2025-02-13T15:18:24.810804537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:24.814201 containerd[1515]: time="2025-02-13T15:18:24.814135584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.91775ms" Feb 13 15:18:24.863673 kubelet[2418]: W0213 15:18:24.863529 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.168.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-0-5f4e073373&limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.863673 kubelet[2418]: E0213 15:18:24.863634 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.168.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-0-5f4e073373&limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.872291 kubelet[2418]: W0213 15:18:24.872078 2418 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.168.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.872291 kubelet[2418]: E0213 15:18:24.872162 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.168.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.872291 kubelet[2418]: W0213 15:18:24.872125 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.168.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.872291 kubelet[2418]: E0213 15:18:24.872257 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.168.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.168.142:6443: connect: connection refused Feb 13 15:18:24.935538 containerd[1515]: time="2025-02-13T15:18:24.935161546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:24.935538 containerd[1515]: time="2025-02-13T15:18:24.935236607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:24.935538 containerd[1515]: time="2025-02-13T15:18:24.935251451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.938319 containerd[1515]: time="2025-02-13T15:18:24.938139454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.943056 containerd[1515]: time="2025-02-13T15:18:24.942791189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:24.943056 containerd[1515]: time="2025-02-13T15:18:24.942869771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:24.943056 containerd[1515]: time="2025-02-13T15:18:24.942886656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.943056 containerd[1515]: time="2025-02-13T15:18:24.942990605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.944369 containerd[1515]: time="2025-02-13T15:18:24.943987602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:24.944369 containerd[1515]: time="2025-02-13T15:18:24.944059622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:24.944369 containerd[1515]: time="2025-02-13T15:18:24.944071945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.944369 containerd[1515]: time="2025-02-13T15:18:24.944142685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.962825 systemd[1]: Started cri-containerd-b96f827a04aae87c94276f1a1ca84c4ce45554bd8cc736087d8310b1f116508e.scope - libcontainer container b96f827a04aae87c94276f1a1ca84c4ce45554bd8cc736087d8310b1f116508e. Feb 13 15:18:24.973135 systemd[1]: Started cri-containerd-9902228fbfe002441e341624b180c5a9d81e453297002062092ceb4995e54a6a.scope - libcontainer container 9902228fbfe002441e341624b180c5a9d81e453297002062092ceb4995e54a6a. Feb 13 15:18:24.983446 systemd[1]: Started cri-containerd-690364f155e48bc4c63783b7f1d9e8c955f39189e5ed56a65ec53be92d77d6b5.scope - libcontainer container 690364f155e48bc4c63783b7f1d9e8c955f39189e5ed56a65ec53be92d77d6b5. Feb 13 15:18:25.018102 containerd[1515]: time="2025-02-13T15:18:25.017696968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-0-5f4e073373,Uid:5937ebd0784bc85dd183172a4e9a08b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b96f827a04aae87c94276f1a1ca84c4ce45554bd8cc736087d8310b1f116508e\"" Feb 13 15:18:25.029183 containerd[1515]: time="2025-02-13T15:18:25.028347212Z" level=info msg="CreateContainer within sandbox \"b96f827a04aae87c94276f1a1ca84c4ce45554bd8cc736087d8310b1f116508e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:18:25.034951 containerd[1515]: time="2025-02-13T15:18:25.034902883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-0-5f4e073373,Uid:3221b5376ab4b7265e714144d794d33d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9902228fbfe002441e341624b180c5a9d81e453297002062092ceb4995e54a6a\"" Feb 13 15:18:25.040513 containerd[1515]: time="2025-02-13T15:18:25.040377545Z" level=info msg="CreateContainer within sandbox \"9902228fbfe002441e341624b180c5a9d81e453297002062092ceb4995e54a6a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:18:25.047173 containerd[1515]: 
time="2025-02-13T15:18:25.046891645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-0-5f4e073373,Uid:cba2bdb8a6da7a83b9f81d841f553f50,Namespace:kube-system,Attempt:0,} returns sandbox id \"690364f155e48bc4c63783b7f1d9e8c955f39189e5ed56a65ec53be92d77d6b5\"" Feb 13 15:18:25.051049 containerd[1515]: time="2025-02-13T15:18:25.051001903Z" level=info msg="CreateContainer within sandbox \"690364f155e48bc4c63783b7f1d9e8c955f39189e5ed56a65ec53be92d77d6b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:18:25.057126 containerd[1515]: time="2025-02-13T15:18:25.057082007Z" level=info msg="CreateContainer within sandbox \"b96f827a04aae87c94276f1a1ca84c4ce45554bd8cc736087d8310b1f116508e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42fd44728532ed77771ac6505e5dd7f130f2b66c09e09c4235d6ca4422bae52f\"" Feb 13 15:18:25.057876 containerd[1515]: time="2025-02-13T15:18:25.057852253Z" level=info msg="StartContainer for \"42fd44728532ed77771ac6505e5dd7f130f2b66c09e09c4235d6ca4422bae52f\"" Feb 13 15:18:25.059323 containerd[1515]: time="2025-02-13T15:18:25.059288116Z" level=info msg="CreateContainer within sandbox \"9902228fbfe002441e341624b180c5a9d81e453297002062092ceb4995e54a6a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8813717b87e377ff73bf1b0f183a5941c3d1c7553107edcd38abc5e2e5460fe1\"" Feb 13 15:18:25.059895 containerd[1515]: time="2025-02-13T15:18:25.059857748Z" level=info msg="StartContainer for \"8813717b87e377ff73bf1b0f183a5941c3d1c7553107edcd38abc5e2e5460fe1\"" Feb 13 15:18:25.070176 containerd[1515]: time="2025-02-13T15:18:25.070129012Z" level=info msg="CreateContainer within sandbox \"690364f155e48bc4c63783b7f1d9e8c955f39189e5ed56a65ec53be92d77d6b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b7eb131931577020e8ebf3901fbedec116f32dc52e86b4e47aadccb933a77d2f\"" Feb 13 15:18:25.070734 containerd[1515]: 
time="2025-02-13T15:18:25.070703885Z" level=info msg="StartContainer for \"b7eb131931577020e8ebf3901fbedec116f32dc52e86b4e47aadccb933a77d2f\"" Feb 13 15:18:25.096333 systemd[1]: Started cri-containerd-8813717b87e377ff73bf1b0f183a5941c3d1c7553107edcd38abc5e2e5460fe1.scope - libcontainer container 8813717b87e377ff73bf1b0f183a5941c3d1c7553107edcd38abc5e2e5460fe1. Feb 13 15:18:25.110245 systemd[1]: Started cri-containerd-42fd44728532ed77771ac6505e5dd7f130f2b66c09e09c4235d6ca4422bae52f.scope - libcontainer container 42fd44728532ed77771ac6505e5dd7f130f2b66c09e09c4235d6ca4422bae52f. Feb 13 15:18:25.112064 systemd[1]: Started cri-containerd-b7eb131931577020e8ebf3901fbedec116f32dc52e86b4e47aadccb933a77d2f.scope - libcontainer container b7eb131931577020e8ebf3901fbedec116f32dc52e86b4e47aadccb933a77d2f. Feb 13 15:18:25.174806 containerd[1515]: time="2025-02-13T15:18:25.174753196Z" level=info msg="StartContainer for \"8813717b87e377ff73bf1b0f183a5941c3d1c7553107edcd38abc5e2e5460fe1\" returns successfully" Feb 13 15:18:25.175246 containerd[1515]: time="2025-02-13T15:18:25.174792287Z" level=info msg="StartContainer for \"b7eb131931577020e8ebf3901fbedec116f32dc52e86b4e47aadccb933a77d2f\" returns successfully" Feb 13 15:18:25.183080 containerd[1515]: time="2025-02-13T15:18:25.183004600Z" level=info msg="StartContainer for \"42fd44728532ed77771ac6505e5dd7f130f2b66c09e09c4235d6ca4422bae52f\" returns successfully" Feb 13 15:18:25.198423 kubelet[2418]: E0213 15:18:25.198371 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.168.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-0-5f4e073373?timeout=10s\": dial tcp 188.245.168.142:6443: connect: connection refused" interval="1.6s" Feb 13 15:18:25.307171 kubelet[2418]: I0213 15:18:25.306772 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:27.512158 kubelet[2418]: E0213 15:18:27.512108 2418 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-1-0-5f4e073373\" not found" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:27.571298 kubelet[2418]: I0213 15:18:27.571060 2418 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:27.780990 kubelet[2418]: I0213 15:18:27.780829 2418 apiserver.go:52] "Watching apiserver" Feb 13 15:18:27.795005 kubelet[2418]: I0213 15:18:27.794954 2418 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:18:27.861451 kubelet[2418]: E0213 15:18:27.861407 2418 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:29.648099 systemd[1]: Reload requested from client PID 2692 ('systemctl') (unit session-7.scope)... Feb 13 15:18:29.648499 systemd[1]: Reloading... Feb 13 15:18:29.763966 zram_generator::config[2746]: No configuration found. Feb 13 15:18:29.860417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:18:29.962354 systemd[1]: Reloading finished in 313 ms. Feb 13 15:18:29.988003 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:30.001234 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:18:30.001598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:30.001667 systemd[1]: kubelet.service: Consumed 968ms CPU time, 113.5M memory peak. Feb 13 15:18:30.007532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:30.127104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:18:30.139561 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:18:30.196312 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:30.197297 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:18:30.197297 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:30.197431 kubelet[2781]: I0213 15:18:30.197370 2781 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:18:30.203545 kubelet[2781]: I0213 15:18:30.203482 2781 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:18:30.203545 kubelet[2781]: I0213 15:18:30.203508 2781 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:18:30.206043 kubelet[2781]: I0213 15:18:30.205996 2781 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:18:30.209042 kubelet[2781]: I0213 15:18:30.209016 2781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:18:30.210910 kubelet[2781]: I0213 15:18:30.210696 2781 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:18:30.221057 kubelet[2781]: I0213 15:18:30.220916 2781 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:18:30.221522 kubelet[2781]: I0213 15:18:30.221196 2781 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:18:30.221522 kubelet[2781]: I0213 15:18:30.221230 2781 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-0-5f4e073373","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:18:30.221522 kubelet[2781]: I0213 15:18:30.221439 2781 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 15:18:30.221522 kubelet[2781]: I0213 15:18:30.221448 2781 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:18:30.221734 kubelet[2781]: I0213 15:18:30.221486 2781 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:30.221734 kubelet[2781]: I0213 15:18:30.221597 2781 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:18:30.221734 kubelet[2781]: I0213 15:18:30.221611 2781 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:18:30.223974 kubelet[2781]: I0213 15:18:30.222531 2781 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:18:30.223974 kubelet[2781]: I0213 15:18:30.222568 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:18:30.227082 kubelet[2781]: I0213 15:18:30.227057 2781 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:18:30.229056 kubelet[2781]: I0213 15:18:30.229017 2781 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:18:30.229512 kubelet[2781]: I0213 15:18:30.229475 2781 server.go:1264] "Started kubelet" Feb 13 15:18:30.241198 kubelet[2781]: I0213 15:18:30.235789 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:18:30.243969 kubelet[2781]: I0213 15:18:30.243171 2781 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:18:30.244246 kubelet[2781]: I0213 15:18:30.244216 2781 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:18:30.247158 kubelet[2781]: I0213 15:18:30.247106 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:18:30.247358 kubelet[2781]: I0213 15:18:30.247320 2781 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:18:30.250083 kubelet[2781]: I0213 15:18:30.249219 2781 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:18:30.251168 kubelet[2781]: I0213 15:18:30.251018 2781 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:18:30.251168 kubelet[2781]: I0213 15:18:30.251152 2781 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:18:30.254966 kubelet[2781]: I0213 15:18:30.254055 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:18:30.257084 kubelet[2781]: I0213 15:18:30.255750 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:18:30.257084 kubelet[2781]: I0213 15:18:30.255787 2781 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:18:30.257084 kubelet[2781]: I0213 15:18:30.255804 2781 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:18:30.257084 kubelet[2781]: E0213 15:18:30.255843 2781 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:18:30.266596 kubelet[2781]: I0213 15:18:30.266326 2781 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:18:30.267026 kubelet[2781]: I0213 15:18:30.266876 2781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:18:30.272755 kubelet[2781]: I0213 15:18:30.272728 2781 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:18:30.295149 kubelet[2781]: E0213 15:18:30.295120 2781 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:18:30.338211 kubelet[2781]: I0213 15:18:30.338185 2781 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:18:30.338411 kubelet[2781]: I0213 15:18:30.338394 2781 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:18:30.338494 kubelet[2781]: I0213 15:18:30.338484 2781 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:30.338701 kubelet[2781]: I0213 15:18:30.338683 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:18:30.338786 kubelet[2781]: I0213 15:18:30.338761 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:18:30.338835 kubelet[2781]: I0213 15:18:30.338826 2781 policy_none.go:49] "None policy: Start" Feb 13 15:18:30.339900 kubelet[2781]: I0213 15:18:30.339881 2781 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:18:30.340059 kubelet[2781]: I0213 15:18:30.340048 2781 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:18:30.340291 kubelet[2781]: I0213 15:18:30.340275 2781 state_mem.go:75] "Updated machine memory state" Feb 13 15:18:30.345128 kubelet[2781]: I0213 15:18:30.345093 2781 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:18:30.345339 kubelet[2781]: I0213 15:18:30.345294 2781 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:18:30.345994 kubelet[2781]: I0213 15:18:30.345453 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:18:30.356107 kubelet[2781]: I0213 15:18:30.356060 2781 topology_manager.go:215] "Topology Admit Handler" podUID="cba2bdb8a6da7a83b9f81d841f553f50" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.356188 kubelet[2781]: I0213 15:18:30.356166 2781 topology_manager.go:215] "Topology Admit Handler" 
podUID="5937ebd0784bc85dd183172a4e9a08b7" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.356231 kubelet[2781]: I0213 15:18:30.356203 2781 topology_manager.go:215] "Topology Admit Handler" podUID="3221b5376ab4b7265e714144d794d33d" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.358970 kubelet[2781]: I0213 15:18:30.357018 2781 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.376221 kubelet[2781]: I0213 15:18:30.376063 2781 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.376221 kubelet[2781]: I0213 15:18:30.376171 2781 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.553213 kubelet[2781]: I0213 15:18:30.552449 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: \"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.553213 kubelet[2781]: I0213 15:18:30.552749 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: \"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554256 kubelet[2781]: I0213 15:18:30.553194 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cba2bdb8a6da7a83b9f81d841f553f50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-0-5f4e073373\" (UID: 
\"cba2bdb8a6da7a83b9f81d841f553f50\") " pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554256 kubelet[2781]: I0213 15:18:30.553822 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554256 kubelet[2781]: I0213 15:18:30.553898 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3221b5376ab4b7265e714144d794d33d-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-0-5f4e073373\" (UID: \"3221b5376ab4b7265e714144d794d33d\") " pod="kube-system/kube-scheduler-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554256 kubelet[2781]: I0213 15:18:30.553961 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554256 kubelet[2781]: I0213 15:18:30.554018 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554620 kubelet[2781]: I0213 15:18:30.554065 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.554620 kubelet[2781]: I0213 15:18:30.554097 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5937ebd0784bc85dd183172a4e9a08b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-0-5f4e073373\" (UID: \"5937ebd0784bc85dd183172a4e9a08b7\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" Feb 13 15:18:30.644633 sudo[2814]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:18:30.644920 sudo[2814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:18:31.111064 sudo[2814]: pam_unix(sudo:session): session closed for user root Feb 13 15:18:31.226914 kubelet[2781]: I0213 15:18:31.226867 2781 apiserver.go:52] "Watching apiserver" Feb 13 15:18:31.251266 kubelet[2781]: I0213 15:18:31.251222 2781 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:18:31.349790 kubelet[2781]: I0213 15:18:31.349727 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-0-5f4e073373" podStartSLOduration=1.349698054 podStartE2EDuration="1.349698054s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:31.335843581 +0000 UTC m=+1.192110699" watchObservedRunningTime="2025-02-13 15:18:31.349698054 +0000 UTC m=+1.205965092" Feb 13 15:18:31.365282 kubelet[2781]: I0213 15:18:31.365131 2781 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-ci-4230-0-1-0-5f4e073373" podStartSLOduration=1.365109178 podStartE2EDuration="1.365109178s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:31.350496344 +0000 UTC m=+1.206763422" watchObservedRunningTime="2025-02-13 15:18:31.365109178 +0000 UTC m=+1.221376296" Feb 13 15:18:31.365282 kubelet[2781]: I0213 15:18:31.365239 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-0-5f4e073373" podStartSLOduration=1.365232004 podStartE2EDuration="1.365232004s" podCreationTimestamp="2025-02-13 15:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:31.360666151 +0000 UTC m=+1.216933229" watchObservedRunningTime="2025-02-13 15:18:31.365232004 +0000 UTC m=+1.221499122" Feb 13 15:18:32.724581 sudo[1809]: pam_unix(sudo:session): session closed for user root Feb 13 15:18:32.883009 sshd[1808]: Connection closed by 139.178.68.195 port 44384 Feb 13 15:18:32.883861 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:32.888655 systemd[1]: sshd@6-188.245.168.142:22-139.178.68.195:44384.service: Deactivated successfully. Feb 13 15:18:32.889212 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:18:32.892328 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:18:32.892632 systemd[1]: session-7.scope: Consumed 7.202s CPU time, 292M memory peak. Feb 13 15:18:32.895063 systemd-logind[1492]: Removed session 7. 
Feb 13 15:18:43.824359 kubelet[2781]: I0213 15:18:43.824321 2781 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:18:43.824801 containerd[1515]: time="2025-02-13T15:18:43.824637362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:18:43.826363 kubelet[2781]: I0213 15:18:43.824930 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:18:44.544404 kubelet[2781]: I0213 15:18:44.544135 2781 topology_manager.go:215] "Topology Admit Handler" podUID="5d55b29d-3571-4f04-9301-f8ff13cc85b7" podNamespace="kube-system" podName="kube-proxy-98tqq" Feb 13 15:18:44.555355 kubelet[2781]: I0213 15:18:44.554871 2781 topology_manager.go:215] "Topology Admit Handler" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" podNamespace="kube-system" podName="cilium-862hs" Feb 13 15:18:44.564245 systemd[1]: Created slice kubepods-besteffort-pod5d55b29d_3571_4f04_9301_f8ff13cc85b7.slice - libcontainer container kubepods-besteffort-pod5d55b29d_3571_4f04_9301_f8ff13cc85b7.slice. Feb 13 15:18:44.578641 systemd[1]: Created slice kubepods-burstable-pod6d8f8fdb_ac3c_4bf4_82e8_5a620cfbe194.slice - libcontainer container kubepods-burstable-pod6d8f8fdb_ac3c_4bf4_82e8_5a620cfbe194.slice. 
Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657393 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-config-path\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657467 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74prm\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657513 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cni-path\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657550 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-etc-cni-netd\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657584 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-xtables-lock\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658006 kubelet[2781]: I0213 15:18:44.657640 2781 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d55b29d-3571-4f04-9301-f8ff13cc85b7-xtables-lock\") pod \"kube-proxy-98tqq\" (UID: \"5d55b29d-3571-4f04-9301-f8ff13cc85b7\") " pod="kube-system/kube-proxy-98tqq" Feb 13 15:18:44.658483 kubelet[2781]: I0213 15:18:44.657674 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-lib-modules\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658483 kubelet[2781]: I0213 15:18:44.657710 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-run\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs" Feb 13 15:18:44.658483 kubelet[2781]: I0213 15:18:44.657745 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d55b29d-3571-4f04-9301-f8ff13cc85b7-lib-modules\") pod \"kube-proxy-98tqq\" (UID: \"5d55b29d-3571-4f04-9301-f8ff13cc85b7\") " pod="kube-system/kube-proxy-98tqq" Feb 13 15:18:44.658483 kubelet[2781]: I0213 15:18:44.657782 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rq5\" (UniqueName: \"kubernetes.io/projected/5d55b29d-3571-4f04-9301-f8ff13cc85b7-kube-api-access-47rq5\") pod \"kube-proxy-98tqq\" (UID: \"5d55b29d-3571-4f04-9301-f8ff13cc85b7\") " pod="kube-system/kube-proxy-98tqq" Feb 13 15:18:44.658483 kubelet[2781]: I0213 15:18:44.657821 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-clustermesh-secrets\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.658807 kubelet[2781]: I0213 15:18:44.657857 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-kernel\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.658807 kubelet[2781]: I0213 15:18:44.657957 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hubble-tls\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.659547 kubelet[2781]: I0213 15:18:44.659129 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d55b29d-3571-4f04-9301-f8ff13cc85b7-kube-proxy\") pod \"kube-proxy-98tqq\" (UID: \"5d55b29d-3571-4f04-9301-f8ff13cc85b7\") " pod="kube-system/kube-proxy-98tqq"
Feb 13 15:18:44.659547 kubelet[2781]: I0213 15:18:44.659203 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hostproc\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.659547 kubelet[2781]: I0213 15:18:44.659243 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-bpf-maps\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.659547 kubelet[2781]: I0213 15:18:44.659303 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-cgroup\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.659547 kubelet[2781]: I0213 15:18:44.659358 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-net\") pod \"cilium-862hs\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " pod="kube-system/cilium-862hs"
Feb 13 15:18:44.778604 kubelet[2781]: E0213 15:18:44.778316 2781 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.778604 kubelet[2781]: E0213 15:18:44.778349 2781 projected.go:200] Error preparing data for projected volume kube-api-access-47rq5 for pod kube-system/kube-proxy-98tqq: configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.778604 kubelet[2781]: E0213 15:18:44.778411 2781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d55b29d-3571-4f04-9301-f8ff13cc85b7-kube-api-access-47rq5 podName:5d55b29d-3571-4f04-9301-f8ff13cc85b7 nodeName:}" failed. No retries permitted until 2025-02-13 15:18:45.27839261 +0000 UTC m=+15.134659648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-47rq5" (UniqueName: "kubernetes.io/projected/5d55b29d-3571-4f04-9301-f8ff13cc85b7-kube-api-access-47rq5") pod "kube-proxy-98tqq" (UID: "5d55b29d-3571-4f04-9301-f8ff13cc85b7") : configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.790852 kubelet[2781]: E0213 15:18:44.790706 2781 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.790852 kubelet[2781]: E0213 15:18:44.790747 2781 projected.go:200] Error preparing data for projected volume kube-api-access-74prm for pod kube-system/cilium-862hs: configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.790852 kubelet[2781]: E0213 15:18:44.790808 2781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm podName:6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194 nodeName:}" failed. No retries permitted until 2025-02-13 15:18:45.290791969 +0000 UTC m=+15.147059047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-74prm" (UniqueName: "kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm") pod "cilium-862hs" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194") : configmap "kube-root-ca.crt" not found
Feb 13 15:18:44.937980 kubelet[2781]: I0213 15:18:44.937702 2781 topology_manager.go:215] "Topology Admit Handler" podUID="2ac7ca86-45e7-462c-bff4-b2d510a52da3" podNamespace="kube-system" podName="cilium-operator-599987898-kv89v"
Feb 13 15:18:44.948082 systemd[1]: Created slice kubepods-besteffort-pod2ac7ca86_45e7_462c_bff4_b2d510a52da3.slice - libcontainer container kubepods-besteffort-pod2ac7ca86_45e7_462c_bff4_b2d510a52da3.slice.
Feb 13 15:18:45.062257 kubelet[2781]: I0213 15:18:45.062204 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ac7ca86-45e7-462c-bff4-b2d510a52da3-cilium-config-path\") pod \"cilium-operator-599987898-kv89v\" (UID: \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\") " pod="kube-system/cilium-operator-599987898-kv89v"
Feb 13 15:18:45.062562 kubelet[2781]: I0213 15:18:45.062534 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95mh\" (UniqueName: \"kubernetes.io/projected/2ac7ca86-45e7-462c-bff4-b2d510a52da3-kube-api-access-g95mh\") pod \"cilium-operator-599987898-kv89v\" (UID: \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\") " pod="kube-system/cilium-operator-599987898-kv89v"
Feb 13 15:18:45.251871 containerd[1515]: time="2025-02-13T15:18:45.251514436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kv89v,Uid:2ac7ca86-45e7-462c-bff4-b2d510a52da3,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:45.278858 containerd[1515]: time="2025-02-13T15:18:45.278755672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:45.279400 containerd[1515]: time="2025-02-13T15:18:45.279214658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:45.279400 containerd[1515]: time="2025-02-13T15:18:45.279263825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.279879 containerd[1515]: time="2025-02-13T15:18:45.279837669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.297113 systemd[1]: Started cri-containerd-491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066.scope - libcontainer container 491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066.
Feb 13 15:18:45.326520 containerd[1515]: time="2025-02-13T15:18:45.326473800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kv89v,Uid:2ac7ca86-45e7-462c-bff4-b2d510a52da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\""
Feb 13 15:18:45.329299 containerd[1515]: time="2025-02-13T15:18:45.329265566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:18:45.477529 containerd[1515]: time="2025-02-13T15:18:45.476890441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98tqq,Uid:5d55b29d-3571-4f04-9301-f8ff13cc85b7,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:45.482027 containerd[1515]: time="2025-02-13T15:18:45.481965378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-862hs,Uid:6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:45.504794 containerd[1515]: time="2025-02-13T15:18:45.504366591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:45.504794 containerd[1515]: time="2025-02-13T15:18:45.504430200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:45.504794 containerd[1515]: time="2025-02-13T15:18:45.504447123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.504794 containerd[1515]: time="2025-02-13T15:18:45.504517733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.514825 containerd[1515]: time="2025-02-13T15:18:45.514186457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:45.514825 containerd[1515]: time="2025-02-13T15:18:45.514250586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:45.514825 containerd[1515]: time="2025-02-13T15:18:45.514265548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.514825 containerd[1515]: time="2025-02-13T15:18:45.514356922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:45.537242 systemd[1]: Started cri-containerd-48f32b602351b5e05bcb295b42235a64bb4e03ff822f103e385b64bc491ae1f9.scope - libcontainer container 48f32b602351b5e05bcb295b42235a64bb4e03ff822f103e385b64bc491ae1f9.
Feb 13 15:18:45.540418 systemd[1]: Started cri-containerd-fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b.scope - libcontainer container fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b.
Feb 13 15:18:45.575441 containerd[1515]: time="2025-02-13T15:18:45.574463769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-862hs,Uid:6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\""
Feb 13 15:18:45.576959 containerd[1515]: time="2025-02-13T15:18:45.576810790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98tqq,Uid:5d55b29d-3571-4f04-9301-f8ff13cc85b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f32b602351b5e05bcb295b42235a64bb4e03ff822f103e385b64bc491ae1f9\""
Feb 13 15:18:45.580708 containerd[1515]: time="2025-02-13T15:18:45.580645467Z" level=info msg="CreateContainer within sandbox \"48f32b602351b5e05bcb295b42235a64bb4e03ff822f103e385b64bc491ae1f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:18:45.603188 containerd[1515]: time="2025-02-13T15:18:45.603047520Z" level=info msg="CreateContainer within sandbox \"48f32b602351b5e05bcb295b42235a64bb4e03ff822f103e385b64bc491ae1f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"466c245673572ef3bccb58a3bb80b150ce25258bfe0429fe103e61d2bd69d004\""
Feb 13 15:18:45.605179 containerd[1515]: time="2025-02-13T15:18:45.604126116Z" level=info msg="StartContainer for \"466c245673572ef3bccb58a3bb80b150ce25258bfe0429fe103e61d2bd69d004\""
Feb 13 15:18:45.633154 systemd[1]: Started cri-containerd-466c245673572ef3bccb58a3bb80b150ce25258bfe0429fe103e61d2bd69d004.scope - libcontainer container 466c245673572ef3bccb58a3bb80b150ce25258bfe0429fe103e61d2bd69d004.
Feb 13 15:18:45.667409 containerd[1515]: time="2025-02-13T15:18:45.667367139Z" level=info msg="StartContainer for \"466c245673572ef3bccb58a3bb80b150ce25258bfe0429fe103e61d2bd69d004\" returns successfully"
Feb 13 15:18:46.943690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783595879.mount: Deactivated successfully.
Feb 13 15:18:47.985025 containerd[1515]: time="2025-02-13T15:18:47.984905210Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:47.986990 containerd[1515]: time="2025-02-13T15:18:47.986502633Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:18:47.988119 containerd[1515]: time="2025-02-13T15:18:47.988074413Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:47.989890 containerd[1515]: time="2025-02-13T15:18:47.989431082Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.660132232s"
Feb 13 15:18:47.989890 containerd[1515]: time="2025-02-13T15:18:47.989702720Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:18:47.991671 containerd[1515]: time="2025-02-13T15:18:47.991503852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:18:47.993297 containerd[1515]: time="2025-02-13T15:18:47.993130599Z" level=info msg="CreateContainer within sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:18:48.018499 containerd[1515]: time="2025-02-13T15:18:48.018202415Z" level=info msg="CreateContainer within sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\""
Feb 13 15:18:48.020206 containerd[1515]: time="2025-02-13T15:18:48.019120341Z" level=info msg="StartContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\""
Feb 13 15:18:48.056123 systemd[1]: Started cri-containerd-bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71.scope - libcontainer container bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71.
Feb 13 15:18:48.089313 containerd[1515]: time="2025-02-13T15:18:48.089260914Z" level=info msg="StartContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" returns successfully"
Feb 13 15:18:48.365952 kubelet[2781]: I0213 15:18:48.365827 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98tqq" podStartSLOduration=4.365801857 podStartE2EDuration="4.365801857s" podCreationTimestamp="2025-02-13 15:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:46.357918017 +0000 UTC m=+16.214185095" watchObservedRunningTime="2025-02-13 15:18:48.365801857 +0000 UTC m=+18.222068975"
Feb 13 15:18:52.161494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68156590.mount: Deactivated successfully.
Feb 13 15:18:53.643818 containerd[1515]: time="2025-02-13T15:18:53.643759845Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:53.645762 containerd[1515]: time="2025-02-13T15:18:53.645708492Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:18:53.647083 containerd[1515]: time="2025-02-13T15:18:53.647040620Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:53.648869 containerd[1515]: time="2025-02-13T15:18:53.648837008Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.657302512s"
Feb 13 15:18:53.648972 containerd[1515]: time="2025-02-13T15:18:53.648876813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:18:53.653717 containerd[1515]: time="2025-02-13T15:18:53.653684781Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:18:53.672127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043305238.mount: Deactivated successfully.
Feb 13 15:18:53.675226 containerd[1515]: time="2025-02-13T15:18:53.675162499Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\""
Feb 13 15:18:53.676765 containerd[1515]: time="2025-02-13T15:18:53.675979723Z" level=info msg="StartContainer for \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\""
Feb 13 15:18:53.716172 systemd[1]: Started cri-containerd-57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9.scope - libcontainer container 57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9.
Feb 13 15:18:53.747772 containerd[1515]: time="2025-02-13T15:18:53.747676156Z" level=info msg="StartContainer for \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\" returns successfully"
Feb 13 15:18:53.761127 systemd[1]: cri-containerd-57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9.scope: Deactivated successfully.
Feb 13 15:18:53.946192 containerd[1515]: time="2025-02-13T15:18:53.945747822Z" level=info msg="shim disconnected" id=57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9 namespace=k8s.io
Feb 13 15:18:53.946192 containerd[1515]: time="2025-02-13T15:18:53.945829192Z" level=warning msg="cleaning up after shim disconnected" id=57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9 namespace=k8s.io
Feb 13 15:18:53.946192 containerd[1515]: time="2025-02-13T15:18:53.945841953Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:54.369167 containerd[1515]: time="2025-02-13T15:18:54.369062037Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:18:54.386754 containerd[1515]: time="2025-02-13T15:18:54.386686037Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\""
Feb 13 15:18:54.388540 containerd[1515]: time="2025-02-13T15:18:54.387766612Z" level=info msg="StartContainer for \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\""
Feb 13 15:18:54.396627 kubelet[2781]: I0213 15:18:54.396565 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-kv89v" podStartSLOduration=7.733859113 podStartE2EDuration="10.396548548s" podCreationTimestamp="2025-02-13 15:18:44 +0000 UTC" firstStartedPulling="2025-02-13 15:18:45.328038388 +0000 UTC m=+15.184305466" lastFinishedPulling="2025-02-13 15:18:47.990727823 +0000 UTC m=+17.846994901" observedRunningTime="2025-02-13 15:18:48.367286221 +0000 UTC m=+18.223553339" watchObservedRunningTime="2025-02-13 15:18:54.396548548 +0000 UTC m=+24.252815626"
Feb 13 15:18:54.421085 systemd[1]: Started cri-containerd-9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2.scope - libcontainer container 9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2.
Feb 13 15:18:54.447202 containerd[1515]: time="2025-02-13T15:18:54.447021929Z" level=info msg="StartContainer for \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\" returns successfully"
Feb 13 15:18:54.460974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:18:54.461232 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:54.462086 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:54.470716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:54.470962 systemd[1]: cri-containerd-9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2.scope: Deactivated successfully.
Feb 13 15:18:54.495340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:54.504409 containerd[1515]: time="2025-02-13T15:18:54.504334323Z" level=info msg="shim disconnected" id=9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2 namespace=k8s.io
Feb 13 15:18:54.504409 containerd[1515]: time="2025-02-13T15:18:54.504385929Z" level=warning msg="cleaning up after shim disconnected" id=9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2 namespace=k8s.io
Feb 13 15:18:54.504409 containerd[1515]: time="2025-02-13T15:18:54.504394130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:54.667741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9-rootfs.mount: Deactivated successfully.
Feb 13 15:18:55.374183 containerd[1515]: time="2025-02-13T15:18:55.374142452Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:18:55.396477 containerd[1515]: time="2025-02-13T15:18:55.396361030Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\""
Feb 13 15:18:55.397057 containerd[1515]: time="2025-02-13T15:18:55.396984827Z" level=info msg="StartContainer for \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\""
Feb 13 15:18:55.428102 systemd[1]: Started cri-containerd-d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2.scope - libcontainer container d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2.
Feb 13 15:18:55.460252 containerd[1515]: time="2025-02-13T15:18:55.460007512Z" level=info msg="StartContainer for \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\" returns successfully"
Feb 13 15:18:55.466167 systemd[1]: cri-containerd-d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2.scope: Deactivated successfully.
Feb 13 15:18:55.487983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2-rootfs.mount: Deactivated successfully.
Feb 13 15:18:55.494217 containerd[1515]: time="2025-02-13T15:18:55.494144918Z" level=info msg="shim disconnected" id=d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2 namespace=k8s.io
Feb 13 15:18:55.494217 containerd[1515]: time="2025-02-13T15:18:55.494212286Z" level=warning msg="cleaning up after shim disconnected" id=d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2 namespace=k8s.io
Feb 13 15:18:55.494217 containerd[1515]: time="2025-02-13T15:18:55.494223127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:56.379308 containerd[1515]: time="2025-02-13T15:18:56.379253155Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:18:56.398654 containerd[1515]: time="2025-02-13T15:18:56.398526181Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\""
Feb 13 15:18:56.399472 containerd[1515]: time="2025-02-13T15:18:56.399434891Z" level=info msg="StartContainer for \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\""
Feb 13 15:18:56.436232 systemd[1]: Started cri-containerd-7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03.scope - libcontainer container 7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03.
Feb 13 15:18:56.463185 systemd[1]: cri-containerd-7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03.scope: Deactivated successfully.
Feb 13 15:18:56.465055 containerd[1515]: time="2025-02-13T15:18:56.464930021Z" level=info msg="StartContainer for \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\" returns successfully"
Feb 13 15:18:56.489006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03-rootfs.mount: Deactivated successfully.
Feb 13 15:18:56.494489 containerd[1515]: time="2025-02-13T15:18:56.494436092Z" level=info msg="shim disconnected" id=7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03 namespace=k8s.io
Feb 13 15:18:56.494888 containerd[1515]: time="2025-02-13T15:18:56.494687242Z" level=warning msg="cleaning up after shim disconnected" id=7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03 namespace=k8s.io
Feb 13 15:18:56.494888 containerd[1515]: time="2025-02-13T15:18:56.494704004Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:57.385032 containerd[1515]: time="2025-02-13T15:18:57.384971356Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:18:57.402962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971937723.mount: Deactivated successfully.
Feb 13 15:18:57.405327 containerd[1515]: time="2025-02-13T15:18:57.405289959Z" level=info msg="CreateContainer within sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\""
Feb 13 15:18:57.405819 containerd[1515]: time="2025-02-13T15:18:57.405774098Z" level=info msg="StartContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\""
Feb 13 15:18:57.440438 systemd[1]: Started cri-containerd-5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440.scope - libcontainer container 5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440.
Feb 13 15:18:57.475486 containerd[1515]: time="2025-02-13T15:18:57.475274896Z" level=info msg="StartContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" returns successfully"
Feb 13 15:18:57.621800 kubelet[2781]: I0213 15:18:57.620843 2781 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:18:57.665186 kubelet[2781]: I0213 15:18:57.665057 2781 topology_manager.go:215] "Topology Admit Handler" podUID="f5154aa1-c5f8-4fdb-a795-a6b8fbace965" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m7c26"
Feb 13 15:18:57.672194 kubelet[2781]: I0213 15:18:57.672147 2781 topology_manager.go:215] "Topology Admit Handler" podUID="715e7996-b52b-41fc-8005-86bb13b3de10" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bp6tl"
Feb 13 15:18:57.675841 systemd[1]: Created slice kubepods-burstable-podf5154aa1_c5f8_4fdb_a795_a6b8fbace965.slice - libcontainer container kubepods-burstable-podf5154aa1_c5f8_4fdb_a795_a6b8fbace965.slice.
Feb 13 15:18:57.685061 systemd[1]: Created slice kubepods-burstable-pod715e7996_b52b_41fc_8005_86bb13b3de10.slice - libcontainer container kubepods-burstable-pod715e7996_b52b_41fc_8005_86bb13b3de10.slice.
Feb 13 15:18:57.755451 kubelet[2781]: I0213 15:18:57.755392 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgdr\" (UniqueName: \"kubernetes.io/projected/715e7996-b52b-41fc-8005-86bb13b3de10-kube-api-access-frgdr\") pod \"coredns-7db6d8ff4d-bp6tl\" (UID: \"715e7996-b52b-41fc-8005-86bb13b3de10\") " pod="kube-system/coredns-7db6d8ff4d-bp6tl"
Feb 13 15:18:57.755625 kubelet[2781]: I0213 15:18:57.755468 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/715e7996-b52b-41fc-8005-86bb13b3de10-config-volume\") pod \"coredns-7db6d8ff4d-bp6tl\" (UID: \"715e7996-b52b-41fc-8005-86bb13b3de10\") " pod="kube-system/coredns-7db6d8ff4d-bp6tl"
Feb 13 15:18:57.755625 kubelet[2781]: I0213 15:18:57.755507 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5154aa1-c5f8-4fdb-a795-a6b8fbace965-config-volume\") pod \"coredns-7db6d8ff4d-m7c26\" (UID: \"f5154aa1-c5f8-4fdb-a795-a6b8fbace965\") " pod="kube-system/coredns-7db6d8ff4d-m7c26"
Feb 13 15:18:57.755625 kubelet[2781]: I0213 15:18:57.755580 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7854j\" (UniqueName: \"kubernetes.io/projected/f5154aa1-c5f8-4fdb-a795-a6b8fbace965-kube-api-access-7854j\") pod \"coredns-7db6d8ff4d-m7c26\" (UID: \"f5154aa1-c5f8-4fdb-a795-a6b8fbace965\") " pod="kube-system/coredns-7db6d8ff4d-m7c26"
Feb 13 15:18:57.982808 containerd[1515]: time="2025-02-13T15:18:57.980678921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m7c26,Uid:f5154aa1-c5f8-4fdb-a795-a6b8fbace965,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:57.990031 containerd[1515]: time="2025-02-13T15:18:57.989780936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bp6tl,Uid:715e7996-b52b-41fc-8005-86bb13b3de10,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:58.425120 kubelet[2781]: I0213 15:18:58.425049 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-862hs" podStartSLOduration=6.351945855 podStartE2EDuration="14.425030399s" podCreationTimestamp="2025-02-13 15:18:44 +0000 UTC" firstStartedPulling="2025-02-13 15:18:45.577179884 +0000 UTC m=+15.433446962" lastFinishedPulling="2025-02-13 15:18:53.650264388 +0000 UTC m=+23.506531506" observedRunningTime="2025-02-13 15:18:58.422960593 +0000 UTC m=+28.279227671" watchObservedRunningTime="2025-02-13 15:18:58.425030399 +0000 UTC m=+28.281297477"
Feb 13 15:18:59.653080 systemd-networkd[1413]: cilium_host: Link UP
Feb 13 15:18:59.654851 systemd-networkd[1413]: cilium_net: Link UP
Feb 13 15:18:59.655070 systemd-networkd[1413]: cilium_net: Gained carrier
Feb 13 15:18:59.655194 systemd-networkd[1413]: cilium_host: Gained carrier
Feb 13 15:18:59.756404 systemd-networkd[1413]: cilium_vxlan: Link UP
Feb 13 15:18:59.756411 systemd-networkd[1413]: cilium_vxlan: Gained carrier
Feb 13 15:19:00.029034 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:19:00.320359 systemd-networkd[1413]: cilium_net: Gained IPv6LL
Feb 13 15:19:00.577362 systemd-networkd[1413]: cilium_host: Gained IPv6LL
Feb 13 15:19:00.748089 systemd-networkd[1413]: lxc_health: Link UP
Feb 13 15:19:00.760700 systemd-networkd[1413]: lxc_health: Gained carrier
Feb 13 15:19:01.040044 kernel: eth0: renamed from tmpa438a
Feb 13 15:19:01.044710 systemd-networkd[1413]: lxc19fdca5cccaf: Link UP
Feb 13 15:19:01.046835 systemd-networkd[1413]: lxc19fdca5cccaf: Gained carrier
Feb 13 15:19:01.083086 kernel: eth0: renamed from tmp2044e
Feb 13 15:19:01.090494 systemd-networkd[1413]: lxca5a9542cba21: Link UP
Feb 13 15:19:01.093088 systemd-networkd[1413]: lxca5a9542cba21: Gained carrier
Feb 13 15:19:01.344274 systemd-networkd[1413]: cilium_vxlan: Gained IPv6LL
Feb 13 15:19:02.241038 systemd-networkd[1413]: lxc19fdca5cccaf: Gained IPv6LL
Feb 13 15:19:02.368510 systemd-networkd[1413]: lxc_health: Gained IPv6LL
Feb 13 15:19:02.753904 systemd-networkd[1413]: lxca5a9542cba21: Gained IPv6LL
Feb 13 15:19:04.132814 kubelet[2781]: I0213 15:19:04.129955 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:19:05.031967 containerd[1515]: time="2025-02-13T15:19:05.029781507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:05.031967 containerd[1515]: time="2025-02-13T15:19:05.030104710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:05.031967 containerd[1515]: time="2025-02-13T15:19:05.030125787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:05.031967 containerd[1515]: time="2025-02-13T15:19:05.030450190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:05.056383 containerd[1515]: time="2025-02-13T15:19:05.054695074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:05.056383 containerd[1515]: time="2025-02-13T15:19:05.055656803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:05.056383 containerd[1515]: time="2025-02-13T15:19:05.055672041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:05.056908 containerd[1515]: time="2025-02-13T15:19:05.056844746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:05.058430 systemd[1]: Started cri-containerd-a438aca39f96fc486fef0a7d9b6bc59a003ae5de8f77d4efc54d27a640fc5c0b.scope - libcontainer container a438aca39f96fc486fef0a7d9b6bc59a003ae5de8f77d4efc54d27a640fc5c0b.
Feb 13 15:19:05.088239 systemd[1]: Started cri-containerd-2044e828743a93be6bf310c261ac452575653730472b1b5e28e338b289721398.scope - libcontainer container 2044e828743a93be6bf310c261ac452575653730472b1b5e28e338b289721398.
Feb 13 15:19:05.136800 containerd[1515]: time="2025-02-13T15:19:05.136544796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m7c26,Uid:f5154aa1-c5f8-4fdb-a795-a6b8fbace965,Namespace:kube-system,Attempt:0,} returns sandbox id \"a438aca39f96fc486fef0a7d9b6bc59a003ae5de8f77d4efc54d27a640fc5c0b\""
Feb 13 15:19:05.143805 containerd[1515]: time="2025-02-13T15:19:05.143694852Z" level=info msg="CreateContainer within sandbox \"a438aca39f96fc486fef0a7d9b6bc59a003ae5de8f77d4efc54d27a640fc5c0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:19:05.146563 containerd[1515]: time="2025-02-13T15:19:05.146318669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bp6tl,Uid:715e7996-b52b-41fc-8005-86bb13b3de10,Namespace:kube-system,Attempt:0,} returns sandbox id \"2044e828743a93be6bf310c261ac452575653730472b1b5e28e338b289721398\""
Feb 13 15:19:05.152648 containerd[1515]: time="2025-02-13T15:19:05.152507156Z" level=info msg="CreateContainer within sandbox \"2044e828743a93be6bf310c261ac452575653730472b1b5e28e338b289721398\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:19:05.167149 containerd[1515]: time="2025-02-13T15:19:05.166980167Z" level=info msg="CreateContainer within sandbox \"a438aca39f96fc486fef0a7d9b6bc59a003ae5de8f77d4efc54d27a640fc5c0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"292820370d0d8587aa661b0feb7969c7b9862c6e1aba974ad065d7fb8278ba2f\""
Feb 13 15:19:05.169141 containerd[1515]: time="2025-02-13T15:19:05.168637416Z" level=info msg="StartContainer for \"292820370d0d8587aa661b0feb7969c7b9862c6e1aba974ad065d7fb8278ba2f\""
Feb 13 15:19:05.182523 containerd[1515]: time="2025-02-13T15:19:05.182480860Z" level=info msg="CreateContainer within sandbox \"2044e828743a93be6bf310c261ac452575653730472b1b5e28e338b289721398\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aec3c0e3ee69046c3f6a8ba5c473276d94ed27f60df65a973229c3c1c7d2a0b1\""
Feb 13 15:19:05.183488 containerd[1515]: time="2025-02-13T15:19:05.183458907Z" level=info msg="StartContainer for \"aec3c0e3ee69046c3f6a8ba5c473276d94ed27f60df65a973229c3c1c7d2a0b1\""
Feb 13 15:19:05.226114 systemd[1]: Started cri-containerd-292820370d0d8587aa661b0feb7969c7b9862c6e1aba974ad065d7fb8278ba2f.scope - libcontainer container 292820370d0d8587aa661b0feb7969c7b9862c6e1aba974ad065d7fb8278ba2f.
Feb 13 15:19:05.237124 systemd[1]: Started cri-containerd-aec3c0e3ee69046c3f6a8ba5c473276d94ed27f60df65a973229c3c1c7d2a0b1.scope - libcontainer container aec3c0e3ee69046c3f6a8ba5c473276d94ed27f60df65a973229c3c1c7d2a0b1.
Feb 13 15:19:05.270634 containerd[1515]: time="2025-02-13T15:19:05.270591260Z" level=info msg="StartContainer for \"292820370d0d8587aa661b0feb7969c7b9862c6e1aba974ad065d7fb8278ba2f\" returns successfully" Feb 13 15:19:05.274286 containerd[1515]: time="2025-02-13T15:19:05.274178326Z" level=info msg="StartContainer for \"aec3c0e3ee69046c3f6a8ba5c473276d94ed27f60df65a973229c3c1c7d2a0b1\" returns successfully" Feb 13 15:19:05.424996 kubelet[2781]: I0213 15:19:05.424728 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bp6tl" podStartSLOduration=21.424710009 podStartE2EDuration="21.424710009s" podCreationTimestamp="2025-02-13 15:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:05.42444068 +0000 UTC m=+35.280707758" watchObservedRunningTime="2025-02-13 15:19:05.424710009 +0000 UTC m=+35.280977087" Feb 13 15:19:05.447351 kubelet[2781]: I0213 15:19:05.447239 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m7c26" podStartSLOduration=21.447208015 podStartE2EDuration="21.447208015s" podCreationTimestamp="2025-02-13 15:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:05.445106897 +0000 UTC m=+35.301373935" watchObservedRunningTime="2025-02-13 15:19:05.447208015 +0000 UTC m=+35.303475133" Feb 13 15:19:06.037692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202791112.mount: Deactivated successfully. Feb 13 15:21:15.282621 systemd[1]: Started sshd@8-188.245.168.142:22-83.168.95.92:39970.service - OpenSSH per-connection server daemon (83.168.95.92:39970). 
Feb 13 15:21:15.426131 sshd[4180]: Invalid user from 83.168.95.92 port 39970 Feb 13 15:21:23.262600 sshd[4180]: Connection closed by invalid user 83.168.95.92 port 39970 [preauth] Feb 13 15:21:23.265663 systemd[1]: sshd@8-188.245.168.142:22-83.168.95.92:39970.service: Deactivated successfully. Feb 13 15:23:19.011443 systemd[1]: Started sshd@9-188.245.168.142:22-139.178.68.195:55178.service - OpenSSH per-connection server daemon (139.178.68.195:55178). Feb 13 15:23:20.011471 sshd[4204]: Accepted publickey for core from 139.178.68.195 port 55178 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:20.013725 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:20.020605 systemd-logind[1492]: New session 8 of user core. Feb 13 15:23:20.026139 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:23:20.777300 sshd[4206]: Connection closed by 139.178.68.195 port 55178 Feb 13 15:23:20.778846 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:20.785396 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:23:20.788647 systemd[1]: sshd@9-188.245.168.142:22-139.178.68.195:55178.service: Deactivated successfully. Feb 13 15:23:20.792233 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:23:20.793631 systemd-logind[1492]: Removed session 8. Feb 13 15:23:25.949246 systemd[1]: Started sshd@10-188.245.168.142:22-139.178.68.195:55194.service - OpenSSH per-connection server daemon (139.178.68.195:55194). Feb 13 15:23:26.926496 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 55194 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:26.928528 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:26.934363 systemd-logind[1492]: New session 9 of user core. 
Feb 13 15:23:26.939233 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:23:27.718709 sshd[4221]: Connection closed by 139.178.68.195 port 55194 Feb 13 15:23:27.719415 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:27.724283 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:23:27.725290 systemd[1]: sshd@10-188.245.168.142:22-139.178.68.195:55194.service: Deactivated successfully. Feb 13 15:23:27.728238 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:23:27.731290 systemd-logind[1492]: Removed session 9. Feb 13 15:23:32.893319 systemd[1]: Started sshd@11-188.245.168.142:22-139.178.68.195:60136.service - OpenSSH per-connection server daemon (139.178.68.195:60136). Feb 13 15:23:33.870634 sshd[4236]: Accepted publickey for core from 139.178.68.195 port 60136 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:33.872511 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:33.879038 systemd-logind[1492]: New session 10 of user core. Feb 13 15:23:33.887170 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:23:34.621173 sshd[4238]: Connection closed by 139.178.68.195 port 60136 Feb 13 15:23:34.622178 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:34.627018 systemd[1]: sshd@11-188.245.168.142:22-139.178.68.195:60136.service: Deactivated successfully. Feb 13 15:23:34.629152 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:23:34.630452 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:23:34.632130 systemd-logind[1492]: Removed session 10. Feb 13 15:23:34.802408 systemd[1]: Started sshd@12-188.245.168.142:22-139.178.68.195:60138.service - OpenSSH per-connection server daemon (139.178.68.195:60138). 
Feb 13 15:23:35.792503 sshd[4250]: Accepted publickey for core from 139.178.68.195 port 60138 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:35.794767 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:35.800541 systemd-logind[1492]: New session 11 of user core. Feb 13 15:23:35.806299 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:23:36.587874 sshd[4252]: Connection closed by 139.178.68.195 port 60138 Feb 13 15:23:36.588917 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:36.593738 systemd[1]: sshd@12-188.245.168.142:22-139.178.68.195:60138.service: Deactivated successfully. Feb 13 15:23:36.596732 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:23:36.597850 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:23:36.599655 systemd-logind[1492]: Removed session 11. Feb 13 15:23:36.765512 systemd[1]: Started sshd@13-188.245.168.142:22-139.178.68.195:46018.service - OpenSSH per-connection server daemon (139.178.68.195:46018). Feb 13 15:23:37.742151 sshd[4262]: Accepted publickey for core from 139.178.68.195 port 46018 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:37.744748 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:37.750366 systemd-logind[1492]: New session 12 of user core. Feb 13 15:23:37.753094 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:23:38.498319 sshd[4264]: Connection closed by 139.178.68.195 port 46018 Feb 13 15:23:38.499222 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:38.503714 systemd[1]: sshd@13-188.245.168.142:22-139.178.68.195:46018.service: Deactivated successfully. Feb 13 15:23:38.508443 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 13 15:23:38.510294 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:23:38.511666 systemd-logind[1492]: Removed session 12. Feb 13 15:23:43.674375 systemd[1]: Started sshd@14-188.245.168.142:22-139.178.68.195:46026.service - OpenSSH per-connection server daemon (139.178.68.195:46026). Feb 13 15:23:44.653209 sshd[4276]: Accepted publickey for core from 139.178.68.195 port 46026 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:44.655276 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:44.662363 systemd-logind[1492]: New session 13 of user core. Feb 13 15:23:44.669171 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:23:45.405236 sshd[4278]: Connection closed by 139.178.68.195 port 46026 Feb 13 15:23:45.406266 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:45.412009 systemd[1]: sshd@14-188.245.168.142:22-139.178.68.195:46026.service: Deactivated successfully. Feb 13 15:23:45.414977 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:23:45.416230 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:23:45.418201 systemd-logind[1492]: Removed session 13. Feb 13 15:23:50.586292 systemd[1]: Started sshd@15-188.245.168.142:22-139.178.68.195:42738.service - OpenSSH per-connection server daemon (139.178.68.195:42738). Feb 13 15:23:51.566171 sshd[4291]: Accepted publickey for core from 139.178.68.195 port 42738 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:51.568387 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:51.574492 systemd-logind[1492]: New session 14 of user core. Feb 13 15:23:51.584261 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:23:52.314307 sshd[4293]: Connection closed by 139.178.68.195 port 42738 Feb 13 15:23:52.314817 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:52.320345 systemd[1]: sshd@15-188.245.168.142:22-139.178.68.195:42738.service: Deactivated successfully. Feb 13 15:23:52.323248 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:23:52.324511 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:23:52.327740 systemd-logind[1492]: Removed session 14. Feb 13 15:23:52.495329 systemd[1]: Started sshd@16-188.245.168.142:22-139.178.68.195:42750.service - OpenSSH per-connection server daemon (139.178.68.195:42750). Feb 13 15:23:53.492656 sshd[4305]: Accepted publickey for core from 139.178.68.195 port 42750 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:53.495045 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:53.500770 systemd-logind[1492]: New session 15 of user core. Feb 13 15:23:53.508500 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:23:54.309124 sshd[4307]: Connection closed by 139.178.68.195 port 42750 Feb 13 15:23:54.310092 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:54.315675 systemd[1]: sshd@16-188.245.168.142:22-139.178.68.195:42750.service: Deactivated successfully. Feb 13 15:23:54.318620 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:23:54.320427 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:23:54.322633 systemd-logind[1492]: Removed session 15. Feb 13 15:23:54.488211 systemd[1]: Started sshd@17-188.245.168.142:22-139.178.68.195:42760.service - OpenSSH per-connection server daemon (139.178.68.195:42760). 
Feb 13 15:23:55.467303 sshd[4317]: Accepted publickey for core from 139.178.68.195 port 42760 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:55.469138 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:55.474880 systemd-logind[1492]: New session 16 of user core. Feb 13 15:23:55.484416 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:23:57.778278 sshd[4319]: Connection closed by 139.178.68.195 port 42760 Feb 13 15:23:57.778843 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:57.785406 systemd[1]: sshd@17-188.245.168.142:22-139.178.68.195:42760.service: Deactivated successfully. Feb 13 15:23:57.787776 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:23:57.787974 systemd[1]: session-16.scope: Consumed 483ms CPU time, 65.5M memory peak. Feb 13 15:23:57.788680 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:23:57.789998 systemd-logind[1492]: Removed session 16. Feb 13 15:23:57.963688 systemd[1]: Started sshd@18-188.245.168.142:22-139.178.68.195:52510.service - OpenSSH per-connection server daemon (139.178.68.195:52510). Feb 13 15:23:58.960159 sshd[4337]: Accepted publickey for core from 139.178.68.195 port 52510 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:23:58.962366 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:58.968340 systemd-logind[1492]: New session 17 of user core. Feb 13 15:23:58.974235 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:23:59.835044 sshd[4339]: Connection closed by 139.178.68.195 port 52510 Feb 13 15:23:59.836022 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:59.841924 systemd[1]: sshd@18-188.245.168.142:22-139.178.68.195:52510.service: Deactivated successfully. 
Feb 13 15:23:59.843859 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:23:59.844836 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:23:59.846224 systemd-logind[1492]: Removed session 17. Feb 13 15:24:00.014388 systemd[1]: Started sshd@19-188.245.168.142:22-139.178.68.195:52516.service - OpenSSH per-connection server daemon (139.178.68.195:52516). Feb 13 15:24:01.010654 sshd[4349]: Accepted publickey for core from 139.178.68.195 port 52516 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:24:01.012671 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:01.019893 systemd-logind[1492]: New session 18 of user core. Feb 13 15:24:01.023122 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:24:01.761969 sshd[4351]: Connection closed by 139.178.68.195 port 52516 Feb 13 15:24:01.763075 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:01.767839 systemd[1]: sshd@19-188.245.168.142:22-139.178.68.195:52516.service: Deactivated successfully. Feb 13 15:24:01.769594 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:24:01.772338 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:24:01.773415 systemd-logind[1492]: Removed session 18. Feb 13 15:24:06.935426 systemd[1]: Started sshd@20-188.245.168.142:22-139.178.68.195:36796.service - OpenSSH per-connection server daemon (139.178.68.195:36796). Feb 13 15:24:07.916743 sshd[4365]: Accepted publickey for core from 139.178.68.195 port 36796 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:24:07.919060 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:07.925161 systemd-logind[1492]: New session 19 of user core. Feb 13 15:24:07.931232 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 15:24:08.657775 sshd[4367]: Connection closed by 139.178.68.195 port 36796 Feb 13 15:24:08.660295 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:08.664541 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:24:08.666493 systemd[1]: sshd@20-188.245.168.142:22-139.178.68.195:36796.service: Deactivated successfully. Feb 13 15:24:08.670996 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:24:08.672100 systemd-logind[1492]: Removed session 19. Feb 13 15:24:13.839406 systemd[1]: Started sshd@21-188.245.168.142:22-139.178.68.195:36804.service - OpenSSH per-connection server daemon (139.178.68.195:36804). Feb 13 15:24:14.825805 sshd[4378]: Accepted publickey for core from 139.178.68.195 port 36804 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:24:14.828139 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:14.834322 systemd-logind[1492]: New session 20 of user core. Feb 13 15:24:14.843292 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:24:15.571036 sshd[4380]: Connection closed by 139.178.68.195 port 36804 Feb 13 15:24:15.572235 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:15.578184 systemd[1]: sshd@21-188.245.168.142:22-139.178.68.195:36804.service: Deactivated successfully. Feb 13 15:24:15.583056 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:24:15.584873 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:24:15.586266 systemd-logind[1492]: Removed session 20. Feb 13 15:24:15.753356 systemd[1]: Started sshd@22-188.245.168.142:22-139.178.68.195:36814.service - OpenSSH per-connection server daemon (139.178.68.195:36814). 
Feb 13 15:24:16.729743 sshd[4393]: Accepted publickey for core from 139.178.68.195 port 36814 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:24:16.732318 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:16.738152 systemd-logind[1492]: New session 21 of user core. Feb 13 15:24:16.749263 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:24:18.850867 containerd[1515]: time="2025-02-13T15:24:18.850781129Z" level=info msg="StopContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" with timeout 30 (s)" Feb 13 15:24:18.852240 containerd[1515]: time="2025-02-13T15:24:18.852060356Z" level=info msg="Stop container \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" with signal terminated" Feb 13 15:24:18.869928 systemd[1]: cri-containerd-bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71.scope: Deactivated successfully. Feb 13 15:24:18.880467 containerd[1515]: time="2025-02-13T15:24:18.879902938Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:24:18.888487 containerd[1515]: time="2025-02-13T15:24:18.888443467Z" level=info msg="StopContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" with timeout 2 (s)" Feb 13 15:24:18.888881 containerd[1515]: time="2025-02-13T15:24:18.888810886Z" level=info msg="Stop container \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" with signal terminated" Feb 13 15:24:18.898914 systemd-networkd[1413]: lxc_health: Link DOWN Feb 13 15:24:18.898924 systemd-networkd[1413]: lxc_health: Lost carrier Feb 13 15:24:18.904215 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71-rootfs.mount: Deactivated successfully. Feb 13 15:24:18.918335 containerd[1515]: time="2025-02-13T15:24:18.918241031Z" level=info msg="shim disconnected" id=bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71 namespace=k8s.io Feb 13 15:24:18.918716 containerd[1515]: time="2025-02-13T15:24:18.918422601Z" level=warning msg="cleaning up after shim disconnected" id=bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71 namespace=k8s.io Feb 13 15:24:18.918716 containerd[1515]: time="2025-02-13T15:24:18.918435442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:18.919312 systemd[1]: cri-containerd-5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440.scope: Deactivated successfully. Feb 13 15:24:18.919615 systemd[1]: cri-containerd-5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440.scope: Consumed 7.736s CPU time, 124.4M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 15:24:18.937381 containerd[1515]: time="2025-02-13T15:24:18.937254310Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:24:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:24:18.945524 containerd[1515]: time="2025-02-13T15:24:18.944466808Z" level=info msg="StopContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" returns successfully" Feb 13 15:24:18.946896 containerd[1515]: time="2025-02-13T15:24:18.946604161Z" level=info msg="StopPodSandbox for \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\"" Feb 13 15:24:18.946896 containerd[1515]: time="2025-02-13T15:24:18.946646883Z" level=info msg="Container to stop \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.949172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440-rootfs.mount: Deactivated successfully. Feb 13 15:24:18.950621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066-shm.mount: Deactivated successfully. 
Feb 13 15:24:18.957894 containerd[1515]: time="2025-02-13T15:24:18.957208597Z" level=info msg="shim disconnected" id=5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440 namespace=k8s.io Feb 13 15:24:18.957894 containerd[1515]: time="2025-02-13T15:24:18.957291042Z" level=warning msg="cleaning up after shim disconnected" id=5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440 namespace=k8s.io Feb 13 15:24:18.957894 containerd[1515]: time="2025-02-13T15:24:18.957332124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:18.966258 systemd[1]: cri-containerd-491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066.scope: Deactivated successfully. Feb 13 15:24:18.978270 containerd[1515]: time="2025-02-13T15:24:18.978216700Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:24:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:24:18.982551 containerd[1515]: time="2025-02-13T15:24:18.980901761Z" level=info msg="StopContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" returns successfully" Feb 13 15:24:18.982914 containerd[1515]: time="2025-02-13T15:24:18.982888826Z" level=info msg="StopPodSandbox for \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\"" Feb 13 15:24:18.983076 containerd[1515]: time="2025-02-13T15:24:18.983053754Z" level=info msg="Container to stop \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.983144 containerd[1515]: time="2025-02-13T15:24:18.983131279Z" level=info msg="Container to stop \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.983209 containerd[1515]: time="2025-02-13T15:24:18.983184001Z" 
level=info msg="Container to stop \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.983279 containerd[1515]: time="2025-02-13T15:24:18.983259245Z" level=info msg="Container to stop \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.983376 containerd[1515]: time="2025-02-13T15:24:18.983361291Z" level=info msg="Container to stop \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:18.992148 systemd[1]: cri-containerd-fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b.scope: Deactivated successfully. Feb 13 15:24:19.008107 containerd[1515]: time="2025-02-13T15:24:19.007892579Z" level=info msg="shim disconnected" id=491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066 namespace=k8s.io Feb 13 15:24:19.008107 containerd[1515]: time="2025-02-13T15:24:19.007971944Z" level=warning msg="cleaning up after shim disconnected" id=491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066 namespace=k8s.io Feb 13 15:24:19.008107 containerd[1515]: time="2025-02-13T15:24:19.008001665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:19.020239 containerd[1515]: time="2025-02-13T15:24:19.020060380Z" level=info msg="shim disconnected" id=fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b namespace=k8s.io Feb 13 15:24:19.020583 containerd[1515]: time="2025-02-13T15:24:19.020560486Z" level=warning msg="cleaning up after shim disconnected" id=fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b namespace=k8s.io Feb 13 15:24:19.021596 containerd[1515]: time="2025-02-13T15:24:19.021571379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:19.028297 containerd[1515]: 
time="2025-02-13T15:24:19.028257331Z" level=info msg="TearDown network for sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" successfully" Feb 13 15:24:19.028297 containerd[1515]: time="2025-02-13T15:24:19.028286532Z" level=info msg="StopPodSandbox for \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" returns successfully" Feb 13 15:24:19.043640 containerd[1515]: time="2025-02-13T15:24:19.042853219Z" level=info msg="TearDown network for sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" successfully" Feb 13 15:24:19.043640 containerd[1515]: time="2025-02-13T15:24:19.042893621Z" level=info msg="StopPodSandbox for \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" returns successfully" Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146240 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-etc-cni-netd\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146380 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ac7ca86-45e7-462c-bff4-b2d510a52da3-cilium-config-path\") pod \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\" (UID: \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\") " Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146438 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hubble-tls\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146477 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-cgroup\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146518 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95mh\" (UniqueName: \"kubernetes.io/projected/2ac7ca86-45e7-462c-bff4-b2d510a52da3-kube-api-access-g95mh\") pod \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\" (UID: \"2ac7ca86-45e7-462c-bff4-b2d510a52da3\") " Feb 13 15:24:19.149402 kubelet[2781]: I0213 15:24:19.146499 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146559 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74prm\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146597 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-run\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146634 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-bpf-maps\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: 
\"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146673 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cni-path\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146708 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-kernel\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151443 kubelet[2781]: I0213 15:24:19.146753 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-config-path\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.146791 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-xtables-lock\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.146825 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-lib-modules\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.146865 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-clustermesh-secrets\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.146904 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-net\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.146960 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hostproc\") pod \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\" (UID: \"6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194\") " Feb 13 15:24:19.151602 kubelet[2781]: I0213 15:24:19.147037 2781 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-etc-cni-netd\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.151725 kubelet[2781]: I0213 15:24:19.147095 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.151725 kubelet[2781]: I0213 15:24:19.147140 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.151725 kubelet[2781]: I0213 15:24:19.149658 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:19.152148 kubelet[2781]: I0213 15:24:19.152112 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac7ca86-45e7-462c-bff4-b2d510a52da3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ac7ca86-45e7-462c-bff4-b2d510a52da3" (UID: "2ac7ca86-45e7-462c-bff4-b2d510a52da3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:24:19.152211 kubelet[2781]: I0213 15:24:19.152172 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.154074 kubelet[2781]: I0213 15:24:19.154044 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac7ca86-45e7-462c-bff4-b2d510a52da3-kube-api-access-g95mh" (OuterVolumeSpecName: "kube-api-access-g95mh") pod "2ac7ca86-45e7-462c-bff4-b2d510a52da3" (UID: "2ac7ca86-45e7-462c-bff4-b2d510a52da3"). InnerVolumeSpecName "kube-api-access-g95mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:19.154342 kubelet[2781]: I0213 15:24:19.154301 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm" (OuterVolumeSpecName: "kube-api-access-74prm") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "kube-api-access-74prm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:19.154391 kubelet[2781]: I0213 15:24:19.154355 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.154391 kubelet[2781]: I0213 15:24:19.154373 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.154391 kubelet[2781]: I0213 15:24:19.154387 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.154463 kubelet[2781]: I0213 15:24:19.154404 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.154463 kubelet[2781]: I0213 15:24:19.154418 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.156539 kubelet[2781]: I0213 15:24:19.156512 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:24:19.156654 kubelet[2781]: I0213 15:24:19.156530 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:24:19.156713 kubelet[2781]: I0213 15:24:19.156554 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" (UID: "6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:19.193374 kubelet[2781]: I0213 15:24:19.193279 2781 scope.go:117] "RemoveContainer" containerID="bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71" Feb 13 15:24:19.198898 containerd[1515]: time="2025-02-13T15:24:19.198592653Z" level=info msg="RemoveContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\"" Feb 13 15:24:19.204756 containerd[1515]: time="2025-02-13T15:24:19.203881811Z" level=info msg="RemoveContainer for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" returns successfully" Feb 13 15:24:19.204756 containerd[1515]: time="2025-02-13T15:24:19.204674333Z" level=error msg="ContainerStatus for \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\": not found" Feb 13 15:24:19.204917 kubelet[2781]: I0213 15:24:19.204155 2781 scope.go:117] "RemoveContainer" containerID="bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71" Feb 13 15:24:19.207602 kubelet[2781]: E0213 15:24:19.207562 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\": not found" containerID="bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71" Feb 13 
15:24:19.208166 kubelet[2781]: I0213 15:24:19.207760 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71"} err="failed to get container status \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf7d3b8e4892e1b2255f9b252b5dd89bfdaabcd92e1ba1548b00b2c80c6c7e71\": not found" Feb 13 15:24:19.208365 kubelet[2781]: I0213 15:24:19.208151 2781 scope.go:117] "RemoveContainer" containerID="5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440" Feb 13 15:24:19.209236 systemd[1]: Removed slice kubepods-besteffort-pod2ac7ca86_45e7_462c_bff4_b2d510a52da3.slice - libcontainer container kubepods-besteffort-pod2ac7ca86_45e7_462c_bff4_b2d510a52da3.slice. Feb 13 15:24:19.212978 systemd[1]: Removed slice kubepods-burstable-pod6d8f8fdb_ac3c_4bf4_82e8_5a620cfbe194.slice - libcontainer container kubepods-burstable-pod6d8f8fdb_ac3c_4bf4_82e8_5a620cfbe194.slice. Feb 13 15:24:19.213180 systemd[1]: kubepods-burstable-pod6d8f8fdb_ac3c_4bf4_82e8_5a620cfbe194.slice: Consumed 7.823s CPU time, 124.9M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 15:24:19.215790 containerd[1515]: time="2025-02-13T15:24:19.215586667Z" level=info msg="RemoveContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\"" Feb 13 15:24:19.223590 containerd[1515]: time="2025-02-13T15:24:19.220976631Z" level=info msg="RemoveContainer for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" returns successfully" Feb 13 15:24:19.223783 kubelet[2781]: I0213 15:24:19.222026 2781 scope.go:117] "RemoveContainer" containerID="7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03" Feb 13 15:24:19.227469 containerd[1515]: time="2025-02-13T15:24:19.227044350Z" level=info msg="RemoveContainer for \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\"" Feb 13 15:24:19.234719 containerd[1515]: time="2025-02-13T15:24:19.234661111Z" level=info msg="RemoveContainer for \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\" returns successfully" Feb 13 15:24:19.235478 kubelet[2781]: I0213 15:24:19.235454 2781 scope.go:117] "RemoveContainer" containerID="d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2" Feb 13 15:24:19.238889 containerd[1515]: time="2025-02-13T15:24:19.238513874Z" level=info msg="RemoveContainer for \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\"" Feb 13 15:24:19.241949 containerd[1515]: time="2025-02-13T15:24:19.241895052Z" level=info msg="RemoveContainer for \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\" returns successfully" Feb 13 15:24:19.242227 kubelet[2781]: I0213 15:24:19.242205 2781 scope.go:117] "RemoveContainer" containerID="9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2" Feb 13 15:24:19.245249 containerd[1515]: time="2025-02-13T15:24:19.244981854Z" level=info msg="RemoveContainer for \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\"" Feb 13 15:24:19.247491 kubelet[2781]: I0213 15:24:19.247447 2781 reconciler_common.go:289] "Volume detached for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-net\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247491 kubelet[2781]: I0213 15:24:19.247475 2781 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hostproc\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247491 kubelet[2781]: I0213 15:24:19.247488 2781 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ac7ca86-45e7-462c-bff4-b2d510a52da3-cilium-config-path\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247491 kubelet[2781]: I0213 15:24:19.247498 2781 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-hubble-tls\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247506 2781 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-cgroup\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247514 2781 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g95mh\" (UniqueName: \"kubernetes.io/projected/2ac7ca86-45e7-462c-bff4-b2d510a52da3-kube-api-access-g95mh\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247522 2781 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-74prm\" (UniqueName: \"kubernetes.io/projected/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-kube-api-access-74prm\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247530 2781 
reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-run\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247537 2781 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-bpf-maps\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247545 2781 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cni-path\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247552 2781 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-host-proc-sys-kernel\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247673 kubelet[2781]: I0213 15:24:19.247560 2781 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-cilium-config-path\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247832 kubelet[2781]: I0213 15:24:19.247568 2781 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-xtables-lock\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247832 kubelet[2781]: I0213 15:24:19.247575 2781 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-lib-modules\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.247832 kubelet[2781]: I0213 15:24:19.247597 2781 
reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194-clustermesh-secrets\") on node \"ci-4230-0-1-0-5f4e073373\" DevicePath \"\"" Feb 13 15:24:19.250615 containerd[1515]: time="2025-02-13T15:24:19.249466410Z" level=info msg="RemoveContainer for \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\" returns successfully" Feb 13 15:24:19.251047 kubelet[2781]: I0213 15:24:19.249681 2781 scope.go:117] "RemoveContainer" containerID="57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9" Feb 13 15:24:19.252056 containerd[1515]: time="2025-02-13T15:24:19.252029585Z" level=info msg="RemoveContainer for \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\"" Feb 13 15:24:19.255019 containerd[1515]: time="2025-02-13T15:24:19.254988460Z" level=info msg="RemoveContainer for \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\" returns successfully" Feb 13 15:24:19.255429 kubelet[2781]: I0213 15:24:19.255394 2781 scope.go:117] "RemoveContainer" containerID="5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440" Feb 13 15:24:19.255734 containerd[1515]: time="2025-02-13T15:24:19.255683297Z" level=error msg="ContainerStatus for \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\": not found" Feb 13 15:24:19.255972 kubelet[2781]: E0213 15:24:19.255917 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\": not found" containerID="5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440" Feb 13 15:24:19.256034 kubelet[2781]: I0213 15:24:19.255992 2781 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440"} err="failed to get container status \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e99f0f8ef5cbfdad5f442c6e6b1652ac4a1cf76e16a50bda92b0157f02bc440\": not found" Feb 13 15:24:19.256062 kubelet[2781]: I0213 15:24:19.256037 2781 scope.go:117] "RemoveContainer" containerID="7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03" Feb 13 15:24:19.256607 containerd[1515]: time="2025-02-13T15:24:19.256542142Z" level=error msg="ContainerStatus for \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\": not found" Feb 13 15:24:19.256852 kubelet[2781]: E0213 15:24:19.256741 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\": not found" containerID="7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03" Feb 13 15:24:19.256852 kubelet[2781]: I0213 15:24:19.256767 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03"} err="failed to get container status \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c9eea12b3d1ba408d208763718c9428707037e83d1a04ca03e43397a83bea03\": not found" Feb 13 15:24:19.256852 kubelet[2781]: I0213 15:24:19.256784 2781 scope.go:117] "RemoveContainer" 
containerID="d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2" Feb 13 15:24:19.257180 containerd[1515]: time="2025-02-13T15:24:19.257117613Z" level=error msg="ContainerStatus for \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\": not found" Feb 13 15:24:19.257530 kubelet[2781]: E0213 15:24:19.257502 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\": not found" containerID="d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2" Feb 13 15:24:19.257574 kubelet[2781]: I0213 15:24:19.257543 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2"} err="failed to get container status \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d30bd339b1a6d757b5be32299f37988977ae98041f8904761d4659a83171e4c2\": not found" Feb 13 15:24:19.257574 kubelet[2781]: I0213 15:24:19.257572 2781 scope.go:117] "RemoveContainer" containerID="9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2" Feb 13 15:24:19.258285 containerd[1515]: time="2025-02-13T15:24:19.258202670Z" level=error msg="ContainerStatus for \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\": not found" Feb 13 15:24:19.258656 kubelet[2781]: E0213 15:24:19.258621 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\": not found" containerID="9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2" Feb 13 15:24:19.258707 kubelet[2781]: I0213 15:24:19.258664 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2"} err="failed to get container status \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9aa50006f62617c3e794695c0c96f692085489c8514f328d6c49a4963ae673f2\": not found" Feb 13 15:24:19.258707 kubelet[2781]: I0213 15:24:19.258683 2781 scope.go:117] "RemoveContainer" containerID="57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9" Feb 13 15:24:19.258874 containerd[1515]: time="2025-02-13T15:24:19.258846984Z" level=error msg="ContainerStatus for \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\": not found" Feb 13 15:24:19.259001 kubelet[2781]: E0213 15:24:19.258979 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\": not found" containerID="57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9" Feb 13 15:24:19.259041 kubelet[2781]: I0213 15:24:19.259007 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9"} err="failed to get container status \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"57fb25ca75100ea1368a6b7a184e41a93992f17206ddeec4d476a562078873a9\": not found" Feb 13 15:24:19.855518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b-rootfs.mount: Deactivated successfully. Feb 13 15:24:19.855669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b-shm.mount: Deactivated successfully. Feb 13 15:24:19.855752 systemd[1]: var-lib-kubelet-pods-6d8f8fdb\x2dac3c\x2d4bf4\x2d82e8\x2d5a620cfbe194-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74prm.mount: Deactivated successfully. Feb 13 15:24:19.855837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066-rootfs.mount: Deactivated successfully. Feb 13 15:24:19.855917 systemd[1]: var-lib-kubelet-pods-2ac7ca86\x2d45e7\x2d462c\x2dbff4\x2db2d510a52da3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg95mh.mount: Deactivated successfully. Feb 13 15:24:19.856033 systemd[1]: var-lib-kubelet-pods-6d8f8fdb\x2dac3c\x2d4bf4\x2d82e8\x2d5a620cfbe194-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:24:19.856861 systemd[1]: var-lib-kubelet-pods-6d8f8fdb\x2dac3c\x2d4bf4\x2d82e8\x2d5a620cfbe194-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 15:24:20.261278 kubelet[2781]: I0213 15:24:20.261075 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac7ca86-45e7-462c-bff4-b2d510a52da3" path="/var/lib/kubelet/pods/2ac7ca86-45e7-462c-bff4-b2d510a52da3/volumes" Feb 13 15:24:20.261658 kubelet[2781]: I0213 15:24:20.261513 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" path="/var/lib/kubelet/pods/6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194/volumes" Feb 13 15:24:20.447666 kubelet[2781]: E0213 15:24:20.447595 2781 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:24:20.908005 sshd[4397]: Connection closed by 139.178.68.195 port 36814 Feb 13 15:24:20.910231 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:20.914263 systemd[1]: sshd@22-188.245.168.142:22-139.178.68.195:36814.service: Deactivated successfully. Feb 13 15:24:20.916733 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:24:20.918683 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:24:20.919608 systemd-logind[1492]: Removed session 21. Feb 13 15:24:21.084034 systemd[1]: Started sshd@23-188.245.168.142:22-139.178.68.195:50648.service - OpenSSH per-connection server daemon (139.178.68.195:50648). Feb 13 15:24:22.084810 sshd[4562]: Accepted publickey for core from 139.178.68.195 port 50648 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:24:22.086670 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:22.092876 systemd-logind[1492]: New session 22 of user core. Feb 13 15:24:22.100350 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 15:24:23.701324 kubelet[2781]: I0213 15:24:23.701270 2781 topology_manager.go:215] "Topology Admit Handler" podUID="40bcdb07-74a0-4625-91b2-6d56dc76e506" podNamespace="kube-system" podName="cilium-hdmh4"
Feb 13 15:24:23.701324 kubelet[2781]: E0213 15:24:23.701333 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="apply-sysctl-overwrites"
Feb 13 15:24:23.701772 kubelet[2781]: E0213 15:24:23.701344 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="clean-cilium-state"
Feb 13 15:24:23.701772 kubelet[2781]: E0213 15:24:23.701351 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ac7ca86-45e7-462c-bff4-b2d510a52da3" containerName="cilium-operator"
Feb 13 15:24:23.701772 kubelet[2781]: E0213 15:24:23.701357 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="mount-cgroup"
Feb 13 15:24:23.701772 kubelet[2781]: E0213 15:24:23.701362 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="mount-bpf-fs"
Feb 13 15:24:23.701772 kubelet[2781]: E0213 15:24:23.701369 2781 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="cilium-agent"
Feb 13 15:24:23.701772 kubelet[2781]: I0213 15:24:23.701390 2781 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac7ca86-45e7-462c-bff4-b2d510a52da3" containerName="cilium-operator"
Feb 13 15:24:23.701772 kubelet[2781]: I0213 15:24:23.701396 2781 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8f8fdb-ac3c-4bf4-82e8-5a620cfbe194" containerName="cilium-agent"
Feb 13 15:24:23.710328 kubelet[2781]: W0213 15:24:23.710293 2781 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-1-0-5f4e073373" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-0-5f4e073373' and this object
Feb 13 15:24:23.710420 kubelet[2781]: E0213 15:24:23.710339 2781 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-1-0-5f4e073373" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-0-5f4e073373' and this object
Feb 13 15:24:23.713180 systemd[1]: Created slice kubepods-burstable-pod40bcdb07_74a0_4625_91b2_6d56dc76e506.slice - libcontainer container kubepods-burstable-pod40bcdb07_74a0_4625_91b2_6d56dc76e506.slice.
Feb 13 15:24:23.778187 kubelet[2781]: I0213 15:24:23.778095 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-cni-path\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778187 kubelet[2781]: I0213 15:24:23.778177 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-hostproc\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778212 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-xtables-lock\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778247 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40bcdb07-74a0-4625-91b2-6d56dc76e506-clustermesh-secrets\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778284 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40bcdb07-74a0-4625-91b2-6d56dc76e506-hubble-tls\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778315 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-etc-cni-netd\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778350 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-lib-modules\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778438 kubelet[2781]: I0213 15:24:23.778384 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-cilium-cgroup\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778755 kubelet[2781]: I0213 15:24:23.778416 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-bpf-maps\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778755 kubelet[2781]: I0213 15:24:23.778448 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-host-proc-sys-kernel\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778755 kubelet[2781]: I0213 15:24:23.778485 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw59\" (UniqueName: \"kubernetes.io/projected/40bcdb07-74a0-4625-91b2-6d56dc76e506-kube-api-access-vfw59\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778755 kubelet[2781]: I0213 15:24:23.778518 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40bcdb07-74a0-4625-91b2-6d56dc76e506-cilium-config-path\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.778755 kubelet[2781]: I0213 15:24:23.778624 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40bcdb07-74a0-4625-91b2-6d56dc76e506-cilium-ipsec-secrets\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.779021 kubelet[2781]: I0213 15:24:23.778664 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-host-proc-sys-net\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.779021 kubelet[2781]: I0213 15:24:23.778729 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40bcdb07-74a0-4625-91b2-6d56dc76e506-cilium-run\") pod \"cilium-hdmh4\" (UID: \"40bcdb07-74a0-4625-91b2-6d56dc76e506\") " pod="kube-system/cilium-hdmh4"
Feb 13 15:24:23.879879 sshd[4564]: Connection closed by 139.178.68.195 port 50648
Feb 13 15:24:23.881277 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Feb 13 15:24:23.901640 systemd[1]: sshd@23-188.245.168.142:22-139.178.68.195:50648.service: Deactivated successfully.
Feb 13 15:24:23.904757 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:24:23.909877 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:24:23.915102 systemd-logind[1492]: Removed session 22.
Feb 13 15:24:24.054308 systemd[1]: Started sshd@24-188.245.168.142:22-139.178.68.195:50654.service - OpenSSH per-connection server daemon (139.178.68.195:50654).
Feb 13 15:24:24.884980 kubelet[2781]: E0213 15:24:24.883764 2781 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:24:24.884980 kubelet[2781]: E0213 15:24:24.883817 2781 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hdmh4: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:24:24.884980 kubelet[2781]: E0213 15:24:24.883902 2781 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/40bcdb07-74a0-4625-91b2-6d56dc76e506-hubble-tls podName:40bcdb07-74a0-4625-91b2-6d56dc76e506 nodeName:}" failed. No retries permitted until 2025-02-13 15:24:25.383873212 +0000 UTC m=+355.240140330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/40bcdb07-74a0-4625-91b2-6d56dc76e506-hubble-tls") pod "cilium-hdmh4" (UID: "40bcdb07-74a0-4625-91b2-6d56dc76e506") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:24:25.035236 sshd[4578]: Accepted publickey for core from 139.178.68.195 port 50654 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI
Feb 13 15:24:25.037473 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:24:25.043419 systemd-logind[1492]: New session 23 of user core.
Feb 13 15:24:25.049250 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:24:25.256810 kubelet[2781]: E0213 15:24:25.256573 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m7c26" podUID="f5154aa1-c5f8-4fdb-a795-a6b8fbace965"
Feb 13 15:24:25.449568 kubelet[2781]: E0213 15:24:25.449289 2781 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:24:25.519321 containerd[1515]: time="2025-02-13T15:24:25.518923125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdmh4,Uid:40bcdb07-74a0-4625-91b2-6d56dc76e506,Namespace:kube-system,Attempt:0,}"
Feb 13 15:24:25.544241 containerd[1515]: time="2025-02-13T15:24:25.544122026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:24:25.544241 containerd[1515]: time="2025-02-13T15:24:25.544182469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:24:25.544241 containerd[1515]: time="2025-02-13T15:24:25.544195270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:24:25.544508 containerd[1515]: time="2025-02-13T15:24:25.544346398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:24:25.562701 systemd[1]: run-containerd-runc-k8s.io-fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3-runc.hwikpQ.mount: Deactivated successfully.
Feb 13 15:24:25.570116 systemd[1]: Started cri-containerd-fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3.scope - libcontainer container fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3.
Feb 13 15:24:25.593743 containerd[1515]: time="2025-02-13T15:24:25.593701345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdmh4,Uid:40bcdb07-74a0-4625-91b2-6d56dc76e506,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\""
Feb 13 15:24:25.598205 containerd[1515]: time="2025-02-13T15:24:25.598160303Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:24:25.608955 containerd[1515]: time="2025-02-13T15:24:25.608890874Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e\""
Feb 13 15:24:25.609351 containerd[1515]: time="2025-02-13T15:24:25.609299176Z" level=info msg="StartContainer for \"bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e\""
Feb 13 15:24:25.634518 systemd[1]: Started cri-containerd-bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e.scope - libcontainer container bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e.
Feb 13 15:24:25.665621 containerd[1515]: time="2025-02-13T15:24:25.665525329Z" level=info msg="StartContainer for \"bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e\" returns successfully"
Feb 13 15:24:25.677003 systemd[1]: cri-containerd-bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e.scope: Deactivated successfully.
Feb 13 15:24:25.708261 containerd[1515]: time="2025-02-13T15:24:25.708015790Z" level=info msg="shim disconnected" id=bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e namespace=k8s.io
Feb 13 15:24:25.708261 containerd[1515]: time="2025-02-13T15:24:25.708249603Z" level=warning msg="cleaning up after shim disconnected" id=bda8919eb08bbb608bcc0225b343d4d726df2b7df462bdb9b255dd1c0d9d7f5e namespace=k8s.io
Feb 13 15:24:25.708261 containerd[1515]: time="2025-02-13T15:24:25.708264044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:24:25.710654 sshd[4580]: Connection closed by 139.178.68.195 port 50654
Feb 13 15:24:25.712110 sshd-session[4578]: pam_unix(sshd:session): session closed for user core
Feb 13 15:24:25.717230 systemd[1]: sshd@24-188.245.168.142:22-139.178.68.195:50654.service: Deactivated successfully.
Feb 13 15:24:25.721949 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:24:25.728091 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:24:25.730131 systemd-logind[1492]: Removed session 23.
Feb 13 15:24:25.892257 systemd[1]: Started sshd@25-188.245.168.142:22-139.178.68.195:50668.service - OpenSSH per-connection server daemon (139.178.68.195:50668).
Feb 13 15:24:26.226109 containerd[1515]: time="2025-02-13T15:24:26.226012265Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:24:26.240125 containerd[1515]: time="2025-02-13T15:24:26.240079455Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e\""
Feb 13 15:24:26.241917 containerd[1515]: time="2025-02-13T15:24:26.241005545Z" level=info msg="StartContainer for \"e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e\""
Feb 13 15:24:26.278237 systemd[1]: Started cri-containerd-e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e.scope - libcontainer container e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e.
Feb 13 15:24:26.308251 containerd[1515]: time="2025-02-13T15:24:26.308193368Z" level=info msg="StartContainer for \"e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e\" returns successfully"
Feb 13 15:24:26.316445 systemd[1]: cri-containerd-e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e.scope: Deactivated successfully.
Feb 13 15:24:26.342008 containerd[1515]: time="2025-02-13T15:24:26.341869164Z" level=info msg="shim disconnected" id=e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e namespace=k8s.io
Feb 13 15:24:26.342008 containerd[1515]: time="2025-02-13T15:24:26.341921686Z" level=warning msg="cleaning up after shim disconnected" id=e04297f00f86162cdf44586694ea214235da12617a174beee4267d44dbe7100e namespace=k8s.io
Feb 13 15:24:26.342008 containerd[1515]: time="2025-02-13T15:24:26.341929767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:24:26.879948 sshd[4689]: Accepted publickey for core from 139.178.68.195 port 50668 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI
Feb 13 15:24:26.882058 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:24:26.888776 systemd-logind[1492]: New session 24 of user core.
Feb 13 15:24:26.894155 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:24:27.231411 containerd[1515]: time="2025-02-13T15:24:27.231298937Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:24:27.252514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402323557.mount: Deactivated successfully.
Feb 13 15:24:27.258109 kubelet[2781]: E0213 15:24:27.258076 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bp6tl" podUID="715e7996-b52b-41fc-8005-86bb13b3de10"
Feb 13 15:24:27.259871 kubelet[2781]: E0213 15:24:27.258871 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m7c26" podUID="f5154aa1-c5f8-4fdb-a795-a6b8fbace965"
Feb 13 15:24:27.260193 containerd[1515]: time="2025-02-13T15:24:27.259613009Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74\""
Feb 13 15:24:27.261190 containerd[1515]: time="2025-02-13T15:24:27.260542659Z" level=info msg="StartContainer for \"2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74\""
Feb 13 15:24:27.294156 systemd[1]: Started cri-containerd-2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74.scope - libcontainer container 2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74.
Feb 13 15:24:27.324274 containerd[1515]: time="2025-02-13T15:24:27.324209420Z" level=info msg="StartContainer for \"2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74\" returns successfully"
Feb 13 15:24:27.327861 systemd[1]: cri-containerd-2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74.scope: Deactivated successfully.
Feb 13 15:24:27.358169 containerd[1515]: time="2025-02-13T15:24:27.358060269Z" level=info msg="shim disconnected" id=2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74 namespace=k8s.io
Feb 13 15:24:27.358169 containerd[1515]: time="2025-02-13T15:24:27.358169315Z" level=warning msg="cleaning up after shim disconnected" id=2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74 namespace=k8s.io
Feb 13 15:24:27.358528 containerd[1515]: time="2025-02-13T15:24:27.358189196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:24:27.399678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a5165d9c15a155b06c65c56815bc0fbe9e6d1fbe7cd61fedb94a2358b2cca74-rootfs.mount: Deactivated successfully.
Feb 13 15:24:27.757774 kubelet[2781]: I0213 15:24:27.756958 2781 setters.go:580] "Node became not ready" node="ci-4230-0-1-0-5f4e073373" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:24:27Z","lastTransitionTime":"2025-02-13T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:24:28.236345 containerd[1515]: time="2025-02-13T15:24:28.236306810Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:24:28.253513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833659800.mount: Deactivated successfully.
Feb 13 15:24:28.256711 containerd[1515]: time="2025-02-13T15:24:28.256672459Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e\""
Feb 13 15:24:28.260252 containerd[1515]: time="2025-02-13T15:24:28.258115857Z" level=info msg="StartContainer for \"0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e\""
Feb 13 15:24:28.291186 systemd[1]: Started cri-containerd-0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e.scope - libcontainer container 0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e.
Feb 13 15:24:28.320024 systemd[1]: cri-containerd-0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e.scope: Deactivated successfully.
Feb 13 15:24:28.324501 containerd[1515]: time="2025-02-13T15:24:28.324426085Z" level=info msg="StartContainer for \"0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e\" returns successfully"
Feb 13 15:24:28.347723 containerd[1515]: time="2025-02-13T15:24:28.347532442Z" level=info msg="shim disconnected" id=0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e namespace=k8s.io
Feb 13 15:24:28.347723 containerd[1515]: time="2025-02-13T15:24:28.347621247Z" level=warning msg="cleaning up after shim disconnected" id=0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e namespace=k8s.io
Feb 13 15:24:28.347723 containerd[1515]: time="2025-02-13T15:24:28.347636848Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:24:28.399927 systemd[1]: run-containerd-runc-k8s.io-0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e-runc.TCdApi.mount: Deactivated successfully.
Feb 13 15:24:28.400145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d3106072d4b853eb44dffc01375596e089148b281f904b5ac6856718387b34e-rootfs.mount: Deactivated successfully.
Feb 13 15:24:29.245389 containerd[1515]: time="2025-02-13T15:24:29.244872888Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:24:29.257159 kubelet[2781]: E0213 15:24:29.256968 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bp6tl" podUID="715e7996-b52b-41fc-8005-86bb13b3de10"
Feb 13 15:24:29.257159 kubelet[2781]: E0213 15:24:29.257109 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m7c26" podUID="f5154aa1-c5f8-4fdb-a795-a6b8fbace965"
Feb 13 15:24:29.270855 containerd[1515]: time="2025-02-13T15:24:29.270769316Z" level=info msg="CreateContainer within sandbox \"fc2c51ca2f454dca09295c5f7216c1a832e1ca03c6935f4aef50bb1548a06ad3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45\""
Feb 13 15:24:29.271896 containerd[1515]: time="2025-02-13T15:24:29.271866095Z" level=info msg="StartContainer for \"893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45\""
Feb 13 15:24:29.311249 systemd[1]: Started cri-containerd-893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45.scope - libcontainer container 893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45.
Feb 13 15:24:29.354572 containerd[1515]: time="2025-02-13T15:24:29.354356917Z" level=info msg="StartContainer for \"893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45\" returns successfully"
Feb 13 15:24:29.686143 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:24:30.275187 kubelet[2781]: I0213 15:24:30.275016 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hdmh4" podStartSLOduration=7.274993777 podStartE2EDuration="7.274993777s" podCreationTimestamp="2025-02-13 15:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:24:30.272584247 +0000 UTC m=+360.128851365" watchObservedRunningTime="2025-02-13 15:24:30.274993777 +0000 UTC m=+360.131260935"
Feb 13 15:24:30.307465 containerd[1515]: time="2025-02-13T15:24:30.307309792Z" level=info msg="StopPodSandbox for \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\""
Feb 13 15:24:30.307465 containerd[1515]: time="2025-02-13T15:24:30.307442839Z" level=info msg="TearDown network for sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" successfully"
Feb 13 15:24:30.307465 containerd[1515]: time="2025-02-13T15:24:30.307455160Z" level=info msg="StopPodSandbox for \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" returns successfully"
Feb 13 15:24:30.308104 containerd[1515]: time="2025-02-13T15:24:30.308040031Z" level=info msg="RemovePodSandbox for \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\""
Feb 13 15:24:30.308104 containerd[1515]: time="2025-02-13T15:24:30.308083514Z" level=info msg="Forcibly stopping sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\""
Feb 13 15:24:30.308322 containerd[1515]: time="2025-02-13T15:24:30.308139837Z" level=info msg="TearDown network for sandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" successfully"
Feb 13 15:24:30.311376 containerd[1515]: time="2025-02-13T15:24:30.311331368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:24:30.311483 containerd[1515]: time="2025-02-13T15:24:30.311400892Z" level=info msg="RemovePodSandbox \"fd6d9f49149a018cc617f8b9f3397bca194a2fdc634ac4450c557fc0a9656d9b\" returns successfully"
Feb 13 15:24:30.312125 containerd[1515]: time="2025-02-13T15:24:30.311969602Z" level=info msg="StopPodSandbox for \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\""
Feb 13 15:24:30.312125 containerd[1515]: time="2025-02-13T15:24:30.312067088Z" level=info msg="TearDown network for sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" successfully"
Feb 13 15:24:30.312125 containerd[1515]: time="2025-02-13T15:24:30.312078488Z" level=info msg="StopPodSandbox for \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" returns successfully"
Feb 13 15:24:30.312707 containerd[1515]: time="2025-02-13T15:24:30.312669160Z" level=info msg="RemovePodSandbox for \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\""
Feb 13 15:24:30.312759 containerd[1515]: time="2025-02-13T15:24:30.312725883Z" level=info msg="Forcibly stopping sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\""
Feb 13 15:24:30.312856 containerd[1515]: time="2025-02-13T15:24:30.312830129Z" level=info msg="TearDown network for sandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" successfully"
Feb 13 15:24:30.317945 containerd[1515]: time="2025-02-13T15:24:30.317004873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:24:30.317945 containerd[1515]: time="2025-02-13T15:24:30.317067956Z" level=info msg="RemovePodSandbox \"491101b83bb32eb2726f899152d28496a071fdf5ccc824f787c7b008f8155066\" returns successfully"
Feb 13 15:24:31.685778 kubelet[2781]: E0213 15:24:31.685540 2781 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42140->127.0.0.1:40205: write tcp 127.0.0.1:42140->127.0.0.1:40205: write: broken pipe
Feb 13 15:24:32.632687 systemd-networkd[1413]: lxc_health: Link UP
Feb 13 15:24:32.644963 systemd-networkd[1413]: lxc_health: Gained carrier
Feb 13 15:24:34.144144 systemd-networkd[1413]: lxc_health: Gained IPv6LL
Feb 13 15:24:35.962724 systemd[1]: run-containerd-runc-k8s.io-893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45-runc.v3MNfl.mount: Deactivated successfully.
Feb 13 15:24:38.124874 systemd[1]: run-containerd-runc-k8s.io-893bc090b9f543a51a6f3d036ef307ea5b20f18428a88f969827c6f1ef3bdc45-runc.ABeaxt.mount: Deactivated successfully.
Feb 13 15:24:38.338033 sshd[4751]: Connection closed by 139.178.68.195 port 50668
Feb 13 15:24:38.338957 sshd-session[4689]: pam_unix(sshd:session): session closed for user core
Feb 13 15:24:38.342855 systemd[1]: sshd@25-188.245.168.142:22-139.178.68.195:50668.service: Deactivated successfully.
Feb 13 15:24:38.345035 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:24:38.347202 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:24:38.348678 systemd-logind[1492]: Removed session 24.