Feb 13 19:13:48.915363 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:13:48.915405 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:13:48.915419 kernel: KASLR enabled
Feb 13 19:13:48.915425 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 19:13:48.915432 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Feb 13 19:13:48.915439 kernel: random: crng init done
Feb 13 19:13:48.915446 kernel: secureboot: Secure boot disabled
Feb 13 19:13:48.915453 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:13:48.915460 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 19:13:48.915468 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:13:48.915475 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915482 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915489 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915495 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915503 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915513 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915520 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915527 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915534 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:13:48.915541 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:13:48.915548 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 19:13:48.915560 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:13:48.915570 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 19:13:48.915582 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 19:13:48.915591 kernel: Zone ranges:
Feb 13 19:13:48.915602 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:13:48.915609 kernel: DMA32 empty
Feb 13 19:13:48.915618 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 19:13:48.915624 kernel: Movable zone start for each node
Feb 13 19:13:48.915631 kernel: Early memory node ranges
Feb 13 19:13:48.915639 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Feb 13 19:13:48.915645 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Feb 13 19:13:48.915653 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Feb 13 19:13:48.915660 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 19:13:48.915667 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 19:13:48.915674 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 19:13:48.915681 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 19:13:48.915690 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 19:13:48.915697 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 19:13:48.915705 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 19:13:48.915715 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 19:13:48.915723 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:13:48.915730 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:13:48.915739 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:13:48.915746 kernel: psci: Trusted OS migration not required
Feb 13 19:13:48.915754 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:13:48.915761 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:13:48.915769 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:13:48.915777 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:13:48.915784 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:13:48.915792 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:13:48.915799 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:13:48.915808 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:13:48.915817 kernel: CPU features: detected: Spectre-v4
Feb 13 19:13:48.915825 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:13:48.915833 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:13:48.915841 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:13:48.915848 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:13:48.915855 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:13:48.915863 kernel: alternatives: applying boot alternatives
Feb 13 19:13:48.915871 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:13:48.915880 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:13:48.915887 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:13:48.915896 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:13:48.915906 kernel: Fallback order for Node 0: 0
Feb 13 19:13:48.915914 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 19:13:48.915921 kernel: Policy zone: Normal
Feb 13 19:13:48.915929 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:13:48.915936 kernel: software IO TLB: area num 2.
Feb 13 19:13:48.915944 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 19:13:48.915951 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Feb 13 19:13:48.915960 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:13:48.915968 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:13:48.915986 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:13:48.915994 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:13:48.916000 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:13:48.916010 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:13:48.916016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:13:48.916023 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:13:48.916032 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:13:48.916040 kernel: GICv3: 256 SPIs implemented
Feb 13 19:13:48.916046 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:13:48.916055 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:13:48.916063 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:13:48.916071 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:13:48.916078 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:13:48.916085 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:13:48.916098 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:13:48.916105 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 19:13:48.916113 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 19:13:48.916120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:13:48.916127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:13:48.916135 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:13:48.916143 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:13:48.916152 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:13:48.916160 kernel: Console: colour dummy device 80x25
Feb 13 19:13:48.916167 kernel: ACPI: Core revision 20230628
Feb 13 19:13:48.916174 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:13:48.916183 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:13:48.916191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:13:48.916199 kernel: landlock: Up and running.
Feb 13 19:13:48.916205 kernel: SELinux: Initializing.
Feb 13 19:13:48.916212 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:13:48.916219 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:13:48.916226 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:13:48.916233 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:13:48.916240 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:13:48.916249 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:13:48.916255 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:13:48.916262 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:13:48.916269 kernel: Remapping and enabling EFI services.
Feb 13 19:13:48.916287 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:13:48.916294 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:13:48.916301 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:13:48.916310 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 19:13:48.916317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:13:48.916327 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:13:48.916335 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:13:48.916347 kernel: SMP: Total of 2 processors activated.
Feb 13 19:13:48.916355 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:13:48.916362 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:13:48.916370 kernel: CPU features: detected: Common not Private translations
Feb 13 19:13:48.916377 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:13:48.916384 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:13:48.916391 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:13:48.916400 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:13:48.916408 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:13:48.916415 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:13:48.916422 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:13:48.916429 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:13:48.916437 kernel: alternatives: applying system-wide alternatives
Feb 13 19:13:48.916445 kernel: devtmpfs: initialized
Feb 13 19:13:48.916452 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:13:48.916461 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:13:48.916468 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:13:48.916475 kernel: SMBIOS 3.0.0 present.
Feb 13 19:13:48.916482 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 19:13:48.916490 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:13:48.916497 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:13:48.916504 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:13:48.916511 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:13:48.916518 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:13:48.916527 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Feb 13 19:13:48.916534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:13:48.916542 kernel: cpuidle: using governor menu
Feb 13 19:13:48.916549 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:13:48.916556 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:13:48.916563 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:13:48.916571 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:13:48.916578 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:13:48.916585 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:13:48.916593 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:13:48.916601 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:13:48.916608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:13:48.916615 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:13:48.916622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:13:48.916629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:13:48.916636 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:13:48.916644 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:13:48.916651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:13:48.916659 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:13:48.916666 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:13:48.916673 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:13:48.916681 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:13:48.916688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:13:48.916695 kernel: ACPI: Interpreter enabled
Feb 13 19:13:48.916702 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:13:48.916709 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:13:48.916716 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:13:48.916725 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:13:48.916732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:13:48.916912 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:13:48.917045 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:13:48.917125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:13:48.917201 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:13:48.917272 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:13:48.917315 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:13:48.917323 kernel: PCI host bridge to bus 0000:00
Feb 13 19:13:48.917408 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:13:48.917471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:13:48.917532 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:13:48.917593 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:13:48.917675 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:13:48.917761 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 19:13:48.917831 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 19:13:48.917902 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 19:13:48.917999 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918083 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 19:13:48.918171 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918256 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 19:13:48.918434 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918508 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 19:13:48.918579 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918646 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 19:13:48.918718 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918782 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 19:13:48.918859 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.918925 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 19:13:48.919012 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.919082 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 19:13:48.919154 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.919221 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 19:13:48.919317 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 19:13:48.919388 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 19:13:48.919462 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 19:13:48.919528 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 19:13:48.919609 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 19:13:48.919679 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 19:13:48.919751 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:13:48.919819 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 19:13:48.919896 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 19:13:48.920009 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 19:13:48.920117 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 19:13:48.920192 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 19:13:48.920265 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 19:13:48.921873 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 19:13:48.921950 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 19:13:48.922084 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 19:13:48.922158 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 19:13:48.922225 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 19:13:48.922332 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 19:13:48.922418 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 19:13:48.922490 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 19:13:48.922567 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 19:13:48.922635 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 19:13:48.922703 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 19:13:48.922770 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 19:13:48.922843 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 19:13:48.922909 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 19:13:48.922989 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 19:13:48.923069 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 19:13:48.923137 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 19:13:48.923202 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 19:13:48.923274 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 19:13:48.923464 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 19:13:48.923531 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 19:13:48.923605 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 19:13:48.923684 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 19:13:48.923764 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 19:13:48.923834 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 19:13:48.923900 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 19:13:48.923982 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 19:13:48.924082 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 19:13:48.924159 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 19:13:48.924234 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 19:13:48.924319 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 19:13:48.924389 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 19:13:48.924453 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 19:13:48.924523 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 19:13:48.924590 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 19:13:48.924656 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 19:13:48.924723 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 19:13:48.924788 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 19:13:48.924852 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 19:13:48.924918 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 19:13:48.925022 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:13:48.925100 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 19:13:48.925172 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:13:48.925239 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 19:13:48.925367 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:13:48.925440 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 19:13:48.925506 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:13:48.925572 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 19:13:48.925641 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:13:48.925799 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 19:13:48.925873 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:13:48.925941 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 19:13:48.926026 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:13:48.926097 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 19:13:48.926164 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:13:48.926237 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 19:13:48.926321 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:13:48.926395 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 19:13:48.926461 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 19:13:48.926528 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 19:13:48.926594 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 19:13:48.926662 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 19:13:48.926728 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 19:13:48.926799 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 19:13:48.926863 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 19:13:48.926929 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 19:13:48.927041 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 19:13:48.927117 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 19:13:48.927183 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 19:13:48.927250 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 19:13:48.927342 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 19:13:48.927420 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 19:13:48.927487 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 19:13:48.927554 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 19:13:48.927630 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 19:13:48.927699 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 19:13:48.927765 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 19:13:48.927836 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 19:13:48.927911 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 19:13:48.928008 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:13:48.928084 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 19:13:48.928160 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 19:13:48.928232 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 19:13:48.928439 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 19:13:48.928517 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:13:48.928592 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 19:13:48.928669 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 19:13:48.928736 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 19:13:48.928804 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 19:13:48.928867 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:13:48.928939 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 19:13:48.929055 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 19:13:48.929128 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 19:13:48.929195 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 19:13:48.929259 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 19:13:48.929341 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:13:48.929417 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 19:13:48.929489 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 19:13:48.929555 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 19:13:48.929625 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 19:13:48.929690 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:13:48.929763 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 19:13:48.929831 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 19:13:48.929900 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 19:13:48.929968 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 19:13:48.930055 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 19:13:48.930122 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:13:48.930200 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 19:13:48.930268 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 19:13:48.930401 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 19:13:48.930470 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 19:13:48.930534 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 19:13:48.930598 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:13:48.930669 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 19:13:48.930736 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 19:13:48.930808 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 19:13:48.930874 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 19:13:48.930938 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 19:13:48.931020 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 19:13:48.931087 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:13:48.931154 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 19:13:48.931219 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 19:13:48.931301 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 19:13:48.931371 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:13:48.931439 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 19:13:48.931504 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 19:13:48.931568 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 19:13:48.931633 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:13:48.931701 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:13:48.931761 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:13:48.931825 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:13:48.931910 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 19:13:48.932028 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 19:13:48.932109 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:13:48.932195 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 19:13:48.932257 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 19:13:48.932338 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:13:48.932426 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 19:13:48.932491 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 19:13:48.932552 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:13:48.932621 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 19:13:48.932682 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 19:13:48.932744 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:13:48.932819 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 19:13:48.932882 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 19:13:48.932945 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:13:48.933033 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 19:13:48.933102 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 19:13:48.933164 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:13:48.933232 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 19:13:48.933320 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 19:13:48.933384 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:13:48.933453 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 19:13:48.933514 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 19:13:48.933578 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:13:48.933648 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 19:13:48.933710 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 19:13:48.933771 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:13:48.933781 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:13:48.933789 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:13:48.933797 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:13:48.933807 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:13:48.933816 kernel: iommu: Default domain type: Translated
Feb 13 19:13:48.933824 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:13:48.933831 kernel: efivars: Registered efivars operations
Feb 13 19:13:48.933839 kernel: vgaarb: loaded
Feb 13 19:13:48.933846 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:13:48.933854 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:13:48.933862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:13:48.933869 kernel: pnp: PnP ACPI init
Feb 13 19:13:48.933948 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:13:48.933961 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:13:48.933969 kernel: NET: Registered PF_INET protocol family
Feb 13 19:13:48.934010 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:13:48.934019 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:13:48.934026 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:13:48.934034 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:13:48.934042 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:13:48.934049 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:13:48.934060 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:13:48.934068 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:13:48.934076 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:13:48.934169 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 19:13:48.934184 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:13:48.934192 kernel: kvm [1]: HYP mode not available
Feb 13 19:13:48.934199 kernel: Initialise system trusted keyrings
Feb 13 19:13:48.934207 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:13:48.934215 kernel: Key type asymmetric registered
Feb 13 19:13:48.934225 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:13:48.934232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:13:48.934239 kernel: io scheduler mq-deadline registered
Feb 13 19:13:48.934247 kernel: io scheduler kyber registered
Feb 13 19:13:48.934255 kernel: io scheduler bfq registered
Feb 13 19:13:48.934264 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:13:48.934522 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 19:13:48.934597 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 19:13:48.934670 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:13:48.934744 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 19:13:48.934810 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 19:13:48.934875 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Feb 13 19:13:48.934943 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 19:13:48.935029 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 19:13:48.935101 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.935170 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 19:13:48.935238 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 19:13:48.935322 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.935396 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 19:13:48.935461 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 19:13:48.935531 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.935602 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 19:13:48.935668 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 19:13:48.935732 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.935800 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 19:13:48.935865 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 19:13:48.935932 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.936047 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 19:13:48.936121 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 19:13:48.936195 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
19:13:48.936207 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 19:13:48.936274 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 19:13:48.936401 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 19:13:48.936469 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:13:48.936479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:13:48.936487 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:13:48.936495 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:13:48.936567 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 19:13:48.936642 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 19:13:48.936653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:13:48.936665 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:13:48.936734 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 19:13:48.936745 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 19:13:48.936753 kernel: thunder_xcv, ver 1.0 Feb 13 19:13:48.936761 kernel: thunder_bgx, ver 1.0 Feb 13 19:13:48.936769 kernel: nicpf, ver 1.0 Feb 13 19:13:48.936780 kernel: nicvf, ver 1.0 Feb 13 19:13:48.936875 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:13:48.936944 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:13:48 UTC (1739474028) Feb 13 19:13:48.936956 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:13:48.936963 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:13:48.936983 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:13:48.936993 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:13:48.937001 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:13:48.937008 kernel: Segment 
Routing with IPv6 Feb 13 19:13:48.937016 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:13:48.937024 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:13:48.937034 kernel: Key type dns_resolver registered Feb 13 19:13:48.937041 kernel: registered taskstats version 1 Feb 13 19:13:48.937049 kernel: Loading compiled-in X.509 certificates Feb 13 19:13:48.937057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027' Feb 13 19:13:48.937064 kernel: Key type .fscrypt registered Feb 13 19:13:48.937072 kernel: Key type fscrypt-provisioning registered Feb 13 19:13:48.937081 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:13:48.937089 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:13:48.937096 kernel: ima: No architecture policies found Feb 13 19:13:48.937105 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:13:48.937113 kernel: clk: Disabling unused clocks Feb 13 19:13:48.937121 kernel: Freeing unused kernel memory: 38336K Feb 13 19:13:48.937128 kernel: Run /init as init process Feb 13 19:13:48.937136 kernel: with arguments: Feb 13 19:13:48.937144 kernel: /init Feb 13 19:13:48.937151 kernel: with environment: Feb 13 19:13:48.937158 kernel: HOME=/ Feb 13 19:13:48.937165 kernel: TERM=linux Feb 13 19:13:48.937174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:13:48.937183 systemd[1]: Successfully made /usr/ read-only. Feb 13 19:13:48.937195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:13:48.937204 systemd[1]: Detected virtualization kvm. Feb 13 19:13:48.937211 systemd[1]: Detected architecture arm64. 
Feb 13 19:13:48.937221 systemd[1]: Running in initrd. Feb 13 19:13:48.937229 systemd[1]: No hostname configured, using default hostname. Feb 13 19:13:48.937238 systemd[1]: Hostname set to . Feb 13 19:13:48.937246 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:13:48.937254 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:13:48.937263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:13:48.937271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:13:48.937290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:13:48.937299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:13:48.937307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:13:48.937318 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:13:48.937327 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:13:48.937335 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:13:48.937344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:13:48.937352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:13:48.937360 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:13:48.937368 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:13:48.937377 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:13:48.937385 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:13:48.937393 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 19:13:48.937401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:13:48.937410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:13:48.937418 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 19:13:48.937426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:13:48.937434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:13:48.937442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:13:48.937451 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:13:48.937459 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:13:48.937468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:13:48.937476 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:13:48.937484 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:13:48.937491 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:13:48.937499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:13:48.937507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:13:48.937517 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:13:48.937525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:13:48.937533 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:13:48.937542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:13:48.937580 systemd-journald[236]: Collecting audit messages is disabled. Feb 13 19:13:48.937602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Feb 13 19:13:48.937610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:13:48.937619 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:13:48.937629 systemd-journald[236]: Journal started Feb 13 19:13:48.937649 systemd-journald[236]: Runtime Journal (/run/log/journal/88e4c8f20b55469cbe3b31f748e06098) is 8M, max 76.6M, 68.6M free. Feb 13 19:13:48.911355 systemd-modules-load[237]: Inserted module 'overlay' Feb 13 19:13:48.939516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:13:48.940862 systemd-modules-load[237]: Inserted module 'br_netfilter' Feb 13 19:13:48.942490 kernel: Bridge firewalling registered Feb 13 19:13:48.942514 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:13:48.953702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:13:48.968619 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:13:48.973896 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:13:48.975566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:13:48.981704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:13:48.996806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:13:49.011571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:13:49.015692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:13:49.019433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 19:13:49.029306 dracut-cmdline[271]: dracut-dracut-053 Feb 13 19:13:49.031005 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33 Feb 13 19:13:49.033843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:13:49.063435 systemd-resolved[280]: Positive Trust Anchors: Feb 13 19:13:49.063454 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:13:49.063486 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:13:49.071772 systemd-resolved[280]: Defaulting to hostname 'linux'. Feb 13 19:13:49.072957 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:13:49.074244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:13:49.132333 kernel: SCSI subsystem initialized Feb 13 19:13:49.137347 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 19:13:49.145728 kernel: iscsi: registered transport (tcp) Feb 13 19:13:49.160350 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:13:49.160424 kernel: QLogic iSCSI HBA Driver Feb 13 19:13:49.221357 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:13:49.228498 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:13:49.246575 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:13:49.247402 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:13:49.247422 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:13:49.301345 kernel: raid6: neonx8 gen() 15650 MB/s Feb 13 19:13:49.318413 kernel: raid6: neonx4 gen() 15741 MB/s Feb 13 19:13:49.335377 kernel: raid6: neonx2 gen() 13114 MB/s Feb 13 19:13:49.352351 kernel: raid6: neonx1 gen() 10425 MB/s Feb 13 19:13:49.369483 kernel: raid6: int64x8 gen() 6741 MB/s Feb 13 19:13:49.386354 kernel: raid6: int64x4 gen() 7308 MB/s Feb 13 19:13:49.403317 kernel: raid6: int64x2 gen() 6076 MB/s Feb 13 19:13:49.420355 kernel: raid6: int64x1 gen() 5031 MB/s Feb 13 19:13:49.420442 kernel: raid6: using algorithm neonx4 gen() 15741 MB/s Feb 13 19:13:49.437355 kernel: raid6: .... xor() 12328 MB/s, rmw enabled Feb 13 19:13:49.437450 kernel: raid6: using neon recovery algorithm Feb 13 19:13:49.442352 kernel: xor: measuring software checksum speed Feb 13 19:13:49.442414 kernel: 8regs : 18552 MB/sec Feb 13 19:13:49.443582 kernel: 32regs : 21681 MB/sec Feb 13 19:13:49.443620 kernel: arm64_neon : 27841 MB/sec Feb 13 19:13:49.443639 kernel: xor: using function: arm64_neon (27841 MB/sec) Feb 13 19:13:49.495341 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:13:49.509670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:13:49.516514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:13:49.534848 systemd-udevd[457]: Using default interface naming scheme 'v255'. Feb 13 19:13:49.538910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:13:49.552548 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:13:49.567927 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Feb 13 19:13:49.614092 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:13:49.627664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:13:49.683369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:13:49.692603 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:13:49.713324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:13:49.714980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:13:49.716959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:13:49.717693 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:13:49.725666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:13:49.745178 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 19:13:49.783676 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:13:49.796334 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:13:49.796443 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 19:13:49.803695 kernel: ACPI: bus type USB registered Feb 13 19:13:49.803918 kernel: usbcore: registered new interface driver usbfs Feb 13 19:13:49.812405 kernel: usbcore: registered new interface driver hub Feb 13 19:13:49.813380 kernel: usbcore: registered new device driver usb Feb 13 19:13:49.832564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:13:49.833417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:13:49.834303 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:13:49.834901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:13:49.835110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:13:49.839419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:13:49.848686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 19:13:49.853388 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:13:49.861050 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 19:13:49.861189 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 19:13:49.861299 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:13:49.861390 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 19:13:49.861472 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 19:13:49.861554 kernel: hub 1-0:1.0: USB hub found Feb 13 19:13:49.861666 kernel: hub 1-0:1.0: 4 ports detected Feb 13 19:13:49.861748 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 19:13:49.861842 kernel: hub 2-0:1.0: USB hub found Feb 13 19:13:49.861928 kernel: hub 2-0:1.0: 4 ports detected Feb 13 19:13:49.871376 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 19:13:49.880759 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 19:13:49.880895 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:13:49.880906 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:13:49.870706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:13:49.881921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:13:49.891520 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 19:13:49.899025 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 19:13:49.899185 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 19:13:49.899321 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 19:13:49.899436 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:13:49.899534 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 13 19:13:49.899544 kernel: GPT:17805311 != 80003071 Feb 13 19:13:49.899554 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:13:49.899564 kernel: GPT:17805311 != 80003071 Feb 13 19:13:49.899573 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:13:49.899583 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:13:49.899598 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 19:13:49.902758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:13:49.957301 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (528) Feb 13 19:13:49.966301 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (512) Feb 13 19:13:49.982498 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 19:13:49.996750 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 19:13:50.005810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 19:13:50.012722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 19:13:50.013491 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 19:13:50.023520 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:13:50.035126 disk-uuid[580]: Primary Header is updated. Feb 13 19:13:50.035126 disk-uuid[580]: Secondary Entries is updated. Feb 13 19:13:50.035126 disk-uuid[580]: Secondary Header is updated. 
Feb 13 19:13:50.044555 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:13:50.105296 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 19:13:50.346513 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 19:13:50.502961 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 19:13:50.503040 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 19:13:50.505339 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 19:13:50.559314 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 19:13:50.560373 kernel: usbcore: registered new interface driver usbhid Feb 13 19:13:50.561308 kernel: usbhid: USB HID core driver Feb 13 19:13:51.062310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:13:51.063918 disk-uuid[581]: The operation has completed successfully. Feb 13 19:13:51.139102 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:13:51.139252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:13:51.168559 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:13:51.185220 sh[596]: Success Feb 13 19:13:51.201297 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:13:51.274548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:13:51.276432 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:13:51.282494 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Feb 13 19:13:51.304832 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 Feb 13 19:13:51.304935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:13:51.304968 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:13:51.305612 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:13:51.305656 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:13:51.313320 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:13:51.315485 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:13:51.317355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:13:51.323562 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:13:51.329680 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:13:51.350158 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:13:51.350215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:13:51.350227 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:13:51.359558 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:13:51.359662 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:13:51.375177 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:13:51.376692 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:13:51.387481 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:13:51.398371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:13:51.468060 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:13:51.478445 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:13:51.521386 systemd-networkd[784]: lo: Link UP Feb 13 19:13:51.522932 ignition[689]: Ignition 2.20.0 Feb 13 19:13:51.521398 systemd-networkd[784]: lo: Gained carrier Feb 13 19:13:51.522947 ignition[689]: Stage: fetch-offline Feb 13 19:13:51.523909 systemd-networkd[784]: Enumeration completed Feb 13 19:13:51.522990 ignition[689]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:13:51.524585 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:13:51.522998 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:13:51.524589 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:13:51.523169 ignition[689]: parsed url from cmdline: "" Feb 13 19:13:51.525303 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:13:51.523172 ignition[689]: no config URL provided Feb 13 19:13:51.526065 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:13:51.523177 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:13:51.526070 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:13:51.523185 ignition[689]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:13:51.526157 systemd[1]: Reached target network.target - Network. 
Feb 13 19:13:51.523191 ignition[689]: failed to fetch config: resource requires networking
Feb 13 19:13:51.526713 systemd-networkd[784]: eth0: Link UP
Feb 13 19:13:51.527771 ignition[689]: Ignition finished successfully
Feb 13 19:13:51.526717 systemd-networkd[784]: eth0: Gained carrier
Feb 13 19:13:51.526725 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:51.530741 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:13:51.532486 systemd-networkd[784]: eth1: Link UP
Feb 13 19:13:51.532490 systemd-networkd[784]: eth1: Gained carrier
Feb 13 19:13:51.532502 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:51.541641 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:13:51.559011 ignition[789]: Ignition 2.20.0
Feb 13 19:13:51.559030 ignition[789]: Stage: fetch
Feb 13 19:13:51.559570 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:51.559587 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:51.559706 ignition[789]: parsed url from cmdline: ""
Feb 13 19:13:51.559709 ignition[789]: no config URL provided
Feb 13 19:13:51.559715 ignition[789]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:13:51.559723 ignition[789]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:13:51.559843 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 19:13:51.560742 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 19:13:51.566386 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:13:51.600422 systemd-networkd[784]: eth0: DHCPv4 address 142.132.176.244/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 19:13:51.761273 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 19:13:51.770603 ignition[789]: GET result: OK
Feb 13 19:13:51.770898 ignition[789]: parsing config with SHA512: 350d04560c813a508501ce55457ad28a30833524f7a90896bf57dd4bc70302d105fe59bb2ddfb0d4a52fd0bc59cd8d7e70dd7fffe4d01b0364cab873c19eb750
Feb 13 19:13:51.779568 unknown[789]: fetched base config from "system"
Feb 13 19:13:51.779957 ignition[789]: fetch: fetch complete
Feb 13 19:13:51.779581 unknown[789]: fetched base config from "system"
Feb 13 19:13:51.779962 ignition[789]: fetch: fetch passed
Feb 13 19:13:51.779587 unknown[789]: fetched user config from "hetzner"
Feb 13 19:13:51.780010 ignition[789]: Ignition finished successfully
Feb 13 19:13:51.784918 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:13:51.797609 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:13:51.822518 ignition[796]: Ignition 2.20.0
Feb 13 19:13:51.822528 ignition[796]: Stage: kargs
Feb 13 19:13:51.822715 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:51.822725 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:51.828440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:13:51.825783 ignition[796]: kargs: kargs passed
Feb 13 19:13:51.825888 ignition[796]: Ignition finished successfully
Feb 13 19:13:51.839754 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:13:51.856779 ignition[802]: Ignition 2.20.0
Feb 13 19:13:51.856793 ignition[802]: Stage: disks
Feb 13 19:13:51.857106 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:51.857117 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:51.859450 ignition[802]: disks: disks passed
Feb 13 19:13:51.861222 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:13:51.859520 ignition[802]: Ignition finished successfully
Feb 13 19:13:51.862920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:13:51.865640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:13:51.866689 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:13:51.868029 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:13:51.869299 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:13:51.876618 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:13:51.901800 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 19:13:51.906728 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:13:52.314477 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:13:52.367589 kernel: EXT4-fs (sda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:13:52.368425 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:13:52.369638 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:13:52.379527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:13:52.385488 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:13:52.388489 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:13:52.392902 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:13:52.392968 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:13:52.395051 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:13:52.404516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:13:52.408376 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (819)
Feb 13 19:13:52.415211 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:13:52.415297 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:13:52.416327 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:13:52.423707 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:13:52.423779 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:13:52.426755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:13:52.471096 coreos-metadata[821]: Feb 13 19:13:52.470 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 19:13:52.474361 coreos-metadata[821]: Feb 13 19:13:52.474 INFO Fetch successful
Feb 13 19:13:52.476576 coreos-metadata[821]: Feb 13 19:13:52.476 INFO wrote hostname ci-4230-0-1-4-0b1b2da462 to /sysroot/etc/hostname
Feb 13 19:13:52.479074 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:13:52.480371 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:13:52.487247 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:13:52.494054 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:13:52.500589 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:13:52.609042 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:13:52.614495 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:13:52.618526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:13:52.628372 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:13:52.661743 ignition[936]: INFO : Ignition 2.20.0
Feb 13 19:13:52.661743 ignition[936]: INFO : Stage: mount
Feb 13 19:13:52.663651 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:52.663651 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:52.663651 ignition[936]: INFO : mount: mount passed
Feb 13 19:13:52.663651 ignition[936]: INFO : Ignition finished successfully
Feb 13 19:13:52.662549 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:13:52.665329 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:13:52.669487 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:13:53.138973 systemd-networkd[784]: eth1: Gained IPv6LL
Feb 13 19:13:53.303856 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:13:53.309503 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:13:53.323387 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947)
Feb 13 19:13:53.325885 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:13:53.325971 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:13:53.326010 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:13:53.333347 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:13:53.333497 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:13:53.337195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:13:53.357723 ignition[965]: INFO : Ignition 2.20.0
Feb 13 19:13:53.357723 ignition[965]: INFO : Stage: files
Feb 13 19:13:53.358977 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:53.358977 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:53.360683 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:13:53.361868 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:13:53.361868 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:13:53.365793 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:13:53.367241 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:13:53.367241 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:13:53.366317 unknown[965]: wrote ssh authorized keys file for user: core
Feb 13 19:13:53.370012 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:13:53.370012 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:13:53.394433 systemd-networkd[784]: eth0: Gained IPv6LL
Feb 13 19:13:53.445454 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:13:53.549416 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:13:53.549416 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:13:53.553457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:13:53.566883 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:13:53.566883 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:13:53.566883 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:13:54.120313 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:13:54.466229 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:13:54.466229 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:13:54.471436 ignition[965]: INFO : files: files passed
Feb 13 19:13:54.471436 ignition[965]: INFO : Ignition finished successfully
Feb 13 19:13:54.470811 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:13:54.482766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:13:54.489086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:13:54.499581 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:13:54.499741 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:13:54.519264 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:13:54.521217 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:13:54.522336 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:13:54.525918 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:13:54.527103 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:13:54.533638 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:13:54.580162 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:13:54.580376 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:13:54.583874 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:13:54.584932 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:13:54.586571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:13:54.593633 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:13:54.609372 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:13:54.618523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:13:54.630196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:13:54.631781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:13:54.633428 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:13:54.634069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:13:54.634202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:13:54.636946 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:13:54.638358 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:13:54.639341 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:13:54.640359 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:13:54.641842 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:13:54.643080 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:13:54.644133 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:13:54.645328 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:13:54.646509 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:13:54.647554 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:13:54.648484 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:13:54.648674 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:13:54.649969 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:13:54.651121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:13:54.652243 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:13:54.652369 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:13:54.653485 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:13:54.653672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:13:54.655252 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:13:54.655400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:13:54.656660 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:13:54.656824 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:13:54.657735 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 19:13:54.657915 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:13:54.670680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:13:54.671852 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:13:54.672153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:13:54.678418 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:13:54.679042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:13:54.679242 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:13:54.680410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:13:54.680512 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:13:54.690472 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:13:54.690595 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:13:54.697983 ignition[1017]: INFO : Ignition 2.20.0
Feb 13 19:13:54.699387 ignition[1017]: INFO : Stage: umount
Feb 13 19:13:54.701205 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:13:54.701205 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:13:54.706843 ignition[1017]: INFO : umount: umount passed
Feb 13 19:13:54.706843 ignition[1017]: INFO : Ignition finished successfully
Feb 13 19:13:54.703167 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:13:54.708221 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:13:54.708435 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:13:54.709440 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:13:54.709513 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:13:54.711274 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:13:54.711366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:13:54.713936 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:13:54.713991 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:13:54.714983 systemd[1]: Stopped target network.target - Network.
Feb 13 19:13:54.715916 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:13:54.715986 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:13:54.717006 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:13:54.717870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:13:54.723458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:13:54.726438 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:13:54.728401 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:13:54.729651 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:13:54.729695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:13:54.730817 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:13:54.730853 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:13:54.731842 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:13:54.731909 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:13:54.732942 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:13:54.732989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:13:54.734033 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:13:54.736425 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:13:54.737438 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:13:54.737540 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:13:54.739615 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:13:54.739677 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:13:54.746092 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:13:54.747328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:13:54.753696 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:13:54.754016 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:13:54.754128 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:13:54.757506 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:13:54.758182 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:13:54.758250 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:13:54.762464 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:13:54.762997 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:13:54.763055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:13:54.767041 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:13:54.767127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:13:54.768892 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:13:54.769552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:13:54.770899 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:13:54.770964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:13:54.772590 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:13:54.776030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:13:54.776100 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:13:54.787943 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:13:54.788382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:13:54.791936 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:13:54.792144 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:13:54.794072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:13:54.794178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:13:54.795648 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:13:54.795685 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:13:54.797089 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:13:54.797149 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:13:54.799221 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:13:54.799272 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:13:54.800976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:13:54.801037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:13:54.815514 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:13:54.816693 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:13:54.816797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:13:54.823261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:13:54.823387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:13:54.832217 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:13:54.832359 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:13:54.832876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:13:54.834328 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:13:54.835731 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:13:54.842549 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:13:54.851353 systemd[1]: Switching root.
Feb 13 19:13:54.891847 systemd-journald[236]: Journal stopped
Feb 13 19:13:56.051635 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:13:56.051726 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:13:56.051741 kernel: SELinux: policy capability open_perms=1
Feb 13 19:13:56.051751 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:13:56.051761 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:13:56.051770 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:13:56.051785 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:13:56.051794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:13:56.051804 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:13:56.051815 kernel: audit: type=1403 audit(1739474035.067:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:13:56.051826 systemd[1]: Successfully loaded SELinux policy in 35.458ms.
Feb 13 19:13:56.051843 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.433ms.
Feb 13 19:13:56.051855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:13:56.051866 systemd[1]: Detected virtualization kvm.
Feb 13 19:13:56.051876 systemd[1]: Detected architecture arm64.
Feb 13 19:13:56.051929 systemd[1]: Detected first boot.
Feb 13 19:13:56.051948 systemd[1]: Hostname set to .
Feb 13 19:13:56.051960 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:13:56.051970 zram_generator::config[1061]: No configuration found.
Feb 13 19:13:56.051980 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:13:56.051990 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:13:56.052001 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:13:56.052011 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:13:56.052021 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:13:56.052031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:13:56.052041 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:13:56.052053 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:13:56.052063 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:13:56.052073 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:13:56.052083 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:13:56.052094 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:13:56.052105 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:13:56.052114 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:13:56.052124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:13:56.052137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:13:56.052148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:13:56.052157 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:13:56.052167 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:13:56.052177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:13:56.052187 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:13:56.052198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:13:56.052210 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:13:56.052220 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:13:56.052230 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:13:56.052239 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:13:56.052250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:13:56.052260 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:13:56.052270 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:13:56.052311 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:13:56.052323 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:13:56.052337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:13:56.052347 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:13:56.052357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:13:56.052367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:13:56.052377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:13:56.052389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:13:56.052401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:13:56.052413 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:13:56.052423 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:13:56.052433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:13:56.052443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:13:56.052453 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:13:56.052467 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:13:56.052477 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:13:56.052489 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:13:56.052500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:13:56.052510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:13:56.052522 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:13:56.052535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:13:56.052548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:13:56.052560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:13:56.052576 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:13:56.052590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:13:56.052603 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:13:56.052616 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:13:56.052628 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:13:56.052753 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:13:56.052801 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:13:56.052817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:13:56.052828 kernel: loop: module loaded
Feb 13 19:13:56.052838 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:13:56.052853 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:13:56.052863 kernel: fuse: init (API version 7.39)
Feb 13 19:13:56.052873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:13:56.052897 kernel: ACPI: bus type drm_connector registered
Feb 13 19:13:56.052910 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:13:56.052921 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:13:56.052931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:13:56.052946 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:13:56.052956 systemd[1]: Stopped verity-setup.service.
Feb 13 19:13:56.052970 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:13:56.052982 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:13:56.052997 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:13:56.053053 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:13:56.053075 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:13:56.053086 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:13:56.053096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:13:56.053107 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:13:56.053117 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:13:56.053127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:13:56.053139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:13:56.053149 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:13:56.053159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:13:56.053170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:13:56.053181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:13:56.053191 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:13:56.053202 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:13:56.053212 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:13:56.053222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:13:56.053235 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:13:56.053247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:13:56.055263 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:13:56.056398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:13:56.056419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:13:56.056432 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:13:56.056443 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:13:56.056584 systemd-journald[1136]: Collecting audit messages is disabled.
Feb 13 19:13:56.056618 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:13:56.056631 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:13:56.056643 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:13:56.056654 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:13:56.056665 systemd-journald[1136]: Journal started
Feb 13 19:13:56.056687 systemd-journald[1136]: Runtime Journal (/run/log/journal/88e4c8f20b55469cbe3b31f748e06098) is 8M, max 76.6M, 68.6M free.
Feb 13 19:13:55.685323 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:13:55.696506 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:13:55.696994 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:13:56.065377 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:13:56.074632 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:13:56.078295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:13:56.084812 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:13:56.089838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:13:56.094376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:13:56.099311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:13:56.107096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:13:56.114526 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:13:56.125560 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:13:56.130518 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:13:56.133353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:13:56.141978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:13:56.145096 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:13:56.146584 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:13:56.149874 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:13:56.180538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:13:56.182303 kernel: loop0: detected capacity change from 0 to 123192
Feb 13 19:13:56.194956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:13:56.213213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:13:56.215631 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:13:56.220790 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:13:56.231445 systemd-journald[1136]: Time spent on flushing to /var/log/journal/88e4c8f20b55469cbe3b31f748e06098 is 24.840ms for 1146 entries.
Feb 13 19:13:56.231445 systemd-journald[1136]: System Journal (/var/log/journal/88e4c8f20b55469cbe3b31f748e06098) is 8M, max 584.8M, 576.8M free.
Feb 13 19:13:56.268593 systemd-journald[1136]: Received client request to flush runtime journal.
Feb 13 19:13:56.268875 kernel: loop1: detected capacity change from 0 to 194096
Feb 13 19:13:56.232511 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:13:56.235523 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:13:56.254515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:13:56.271427 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:13:56.298699 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:13:56.300672 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:13:56.326632 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Feb 13 19:13:56.326655 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Feb 13 19:13:56.337447 kernel: loop2: detected capacity change from 0 to 8
Feb 13 19:13:56.342648 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:13:56.362112 kernel: loop3: detected capacity change from 0 to 113512
Feb 13 19:13:56.410005 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 19:13:56.436318 kernel: loop5: detected capacity change from 0 to 194096
Feb 13 19:13:56.467328 kernel: loop6: detected capacity change from 0 to 8
Feb 13 19:13:56.472025 kernel: loop7: detected capacity change from 0 to 113512
Feb 13 19:13:56.495153 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Feb 13 19:13:56.496543 (sd-merge)[1208]: Merged extensions into '/usr'.
Feb 13 19:13:56.506796 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:13:56.507088 systemd[1]: Reloading...
Feb 13 19:13:56.663317 zram_generator::config[1236]: No configuration found.
Feb 13 19:13:56.683975 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:13:56.807011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:13:56.880469 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:13:56.881105 systemd[1]: Reloading finished in 373 ms.
Feb 13 19:13:56.901428 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:13:56.902559 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:13:56.916686 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:13:56.923494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:13:56.941269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:13:56.944964 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:13:56.945090 systemd[1]: Reloading...
Feb 13 19:13:56.955936 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:13:56.956200 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:13:56.957007 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:13:56.957266 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Feb 13 19:13:56.957377 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Feb 13 19:13:56.960820 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:13:56.960840 systemd-tmpfiles[1274]: Skipping /boot
Feb 13 19:13:56.972419 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:13:56.972437 systemd-tmpfiles[1274]: Skipping /boot
Feb 13 19:13:57.031429 zram_generator::config[1306]: No configuration found.
Feb 13 19:13:57.141947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:13:57.216179 systemd[1]: Reloading finished in 270 ms.
Feb 13 19:13:57.255770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:13:57.275086 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:13:57.282846 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:13:57.286715 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:13:57.291756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:13:57.301694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:13:57.305680 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:13:57.312150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:13:57.315771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:13:57.320824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:13:57.332628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:13:57.333634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:13:57.333802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:13:57.336688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:13:57.336932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:13:57.337025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:13:57.342359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:13:57.345660 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:13:57.346973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:13:57.347125 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:13:57.356504 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:13:57.364933 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:13:57.377503 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:13:57.378623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:13:57.379391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:13:57.383831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:13:57.411039 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:13:57.417245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:13:57.418385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:13:57.420682 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:13:57.420925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:13:57.424107 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:13:57.426455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:13:57.428260 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:13:57.431051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:13:57.437734 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:13:57.462714 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Feb 13 19:13:57.466111 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:13:57.468121 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:13:57.474078 augenrules[1381]: No rules
Feb 13 19:13:57.475973 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:13:57.476436 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:13:57.478392 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:13:57.497937 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:13:57.508459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:13:57.521620 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:13:57.606139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:13:57.607696 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:13:57.660684 systemd-resolved[1347]: Positive Trust Anchors:
Feb 13 19:13:57.661143 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:13:57.661307 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:13:57.662201 systemd-networkd[1398]: lo: Link UP
Feb 13 19:13:57.662263 systemd-networkd[1398]: lo: Gained carrier
Feb 13 19:13:57.669538 systemd-resolved[1347]: Using system hostname 'ci-4230-0-1-4-0b1b2da462'.
Feb 13 19:13:57.672099 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:13:57.673705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:13:57.688468 systemd-networkd[1398]: Enumeration completed
Feb 13 19:13:57.688607 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:13:57.690396 systemd[1]: Reached target network.target - Network.
Feb 13 19:13:57.699484 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:13:57.702537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:13:57.703986 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:13:57.720857 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:13:57.748524 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:57.748554 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:13:57.749738 systemd-networkd[1398]: eth0: Link UP
Feb 13 19:13:57.749749 systemd-networkd[1398]: eth0: Gained carrier
Feb 13 19:13:57.749771 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:57.758644 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:57.758661 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:13:57.759914 systemd-networkd[1398]: eth1: Link UP
Feb 13 19:13:57.759923 systemd-networkd[1398]: eth1: Gained carrier
Feb 13 19:13:57.759948 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:13:57.792404 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:13:57.793084 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Feb 13 19:13:57.793211 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Feb 13 19:13:57.803987 systemd-networkd[1398]: eth0: DHCPv4 address 142.132.176.244/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 19:13:57.806001 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Feb 13 19:13:57.810384 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:13:57.876389 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1413)
Feb 13 19:13:57.929096 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Feb 13 19:13:57.929248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:13:57.937575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:13:57.942137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:13:57.947616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:13:57.948394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:13:57.948445 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:13:57.948468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:13:57.948826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:13:57.950331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:13:57.954925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Feb 13 19:13:57.973573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:13:57.974793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:13:57.975110 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:13:57.981254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:13:57.984670 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:13:57.984933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:13:57.993115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:13:58.009350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:13:58.016591 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Feb 13 19:13:58.016704 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 19:13:58.016724 kernel: [drm] features: -context_init
Feb 13 19:13:58.017428 kernel: [drm] number of scanouts: 1
Feb 13 19:13:58.018368 kernel: [drm] number of cap sets: 0
Feb 13 19:13:58.019379 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Feb 13 19:13:58.024311 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:13:58.038314 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 19:13:58.058672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:13:58.066725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:13:58.067026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:13:58.077671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:13:58.159475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:13:58.204926 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:13:58.218088 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:13:58.235062 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:13:58.270989 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:13:58.274137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:13:58.274989 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:13:58.276035 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:13:58.277193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:13:58.278346 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:13:58.279160 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:13:58.280182 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:13:58.281096 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:13:58.281161 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:13:58.281823 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:13:58.284194 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:13:58.287033 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:13:58.293182 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:13:58.294426 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:13:58.295335 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:13:58.304609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:13:58.307119 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:13:58.319084 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:13:58.323246 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:13:58.325409 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:13:58.324630 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:13:58.325336 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:13:58.326108 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:13:58.326146 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:13:58.334307 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:13:58.337715 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:13:58.341628 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:13:58.346444 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:13:58.352559 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:13:58.355061 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:13:58.359493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:13:58.365556 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:13:58.371653 jq[1469]: false
Feb 13 19:13:58.373608 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Feb 13 19:13:58.382408 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:13:58.387050 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:13:58.396546 coreos-metadata[1467]: Feb 13 19:13:58.396 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Feb 13 19:13:58.402693 coreos-metadata[1467]: Feb 13 19:13:58.402 INFO Fetch successful
Feb 13 19:13:58.402693 coreos-metadata[1467]: Feb 13 19:13:58.402 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Feb 13 19:13:58.400416 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:13:58.403000 coreos-metadata[1467]: Feb 13 19:13:58.402 INFO Fetch successful
Feb 13 19:13:58.404104 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:13:58.404798 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:13:58.408852 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:13:58.413005 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:13:58.415092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:13:58.419994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:13:58.420584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:13:58.448031 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:13:58.448369 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:13:58.464205 extend-filesystems[1472]: Found loop4
Feb 13 19:13:58.468306 jq[1481]: true
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found loop5
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found loop6
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found loop7
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda1
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda2
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda3
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found usr
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda4
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda6
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda7
Feb 13 19:13:58.478672 extend-filesystems[1472]: Found sda9
Feb 13 19:13:58.478672 extend-filesystems[1472]: Checking size of /dev/sda9
Feb 13 19:13:58.542382 tar[1484]: linux-arm64/helm
Feb 13 19:13:58.481803 dbus-daemon[1468]: [system] SELinux support is enabled
Feb 13 19:13:58.482818 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:13:58.556369 extend-filesystems[1472]: Resized partition /dev/sda9
Feb 13 19:13:58.496104 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:13:58.496252 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:13:58.497316 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:13:58.497367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:13:58.501125 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:13:58.501404 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:13:58.501430 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:13:58.568825 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:13:58.570489 jq[1506]: true
Feb 13 19:13:58.575296 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Feb 13 19:13:58.602342 update_engine[1480]: I20250213 19:13:58.600703 1480 main.cc:92] Flatcar Update Engine starting
Feb 13 19:13:58.615136 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:13:58.618607 update_engine[1480]: I20250213 19:13:58.618172 1480 update_check_scheduler.cc:74] Next update check in 5m5s
Feb 13 19:13:58.619228 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:13:58.621483 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:13:58.636946 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:13:58.716343 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406)
Feb 13 19:13:58.740992 systemd-logind[1478]: New seat seat0.
Feb 13 19:13:58.742333 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:13:58.742361 systemd-logind[1478]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Feb 13 19:13:58.742616 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:13:58.773091 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Feb 13 19:13:58.806143 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 13 19:13:58.806143 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5
Feb 13 19:13:58.806143 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Feb 13 19:13:58.812553 extend-filesystems[1472]: Resized filesystem in /dev/sda9
Feb 13 19:13:58.812553 extend-filesystems[1472]: Found sr0
Feb 13 19:13:58.809316 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:13:58.814585 bash[1542]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:13:58.809539 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:13:58.820420 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:13:58.864951 systemd[1]: Starting sshkeys.service...
Feb 13 19:13:58.891934 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:13:58.902935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:13:58.944551 containerd[1499]: time="2025-02-13T19:13:58.944388160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:13:58.961146 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:13:58.978994 coreos-metadata[1547]: Feb 13 19:13:58.977 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Feb 13 19:13:58.980352 coreos-metadata[1547]: Feb 13 19:13:58.980 INFO Fetch successful
Feb 13 19:13:58.984028 unknown[1547]: wrote ssh authorized keys file for user: core
Feb 13 19:13:59.027828 containerd[1499]: time="2025-02-13T19:13:59.027738920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.033042 containerd[1499]: time="2025-02-13T19:13:59.032985240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:13:59.033042 containerd[1499]: time="2025-02-13T19:13:59.033033200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:13:59.033042 containerd[1499]: time="2025-02-13T19:13:59.033053960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:13:59.035029 containerd[1499]: time="2025-02-13T19:13:59.034981040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:13:59.035029 containerd[1499]: time="2025-02-13T19:13:59.035025040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035182 containerd[1499]: time="2025-02-13T19:13:59.035111640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035182 containerd[1499]: time="2025-02-13T19:13:59.035129320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035437 containerd[1499]: time="2025-02-13T19:13:59.035413000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035437 containerd[1499]: time="2025-02-13T19:13:59.035435920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035483 containerd[1499]: time="2025-02-13T19:13:59.035451160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035483 containerd[1499]: time="2025-02-13T19:13:59.035462240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035564 containerd[1499]: time="2025-02-13T19:13:59.035546840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035777 containerd[1499]: time="2025-02-13T19:13:59.035755400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:13:59.035913 update-ssh-keys[1558]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:13:59.038587 containerd[1499]: time="2025-02-13T19:13:59.038535120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:13:59.038587 containerd[1499]: time="2025-02-13T19:13:59.038582440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:13:59.038930 containerd[1499]: time="2025-02-13T19:13:59.038732200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:13:59.038930 containerd[1499]: time="2025-02-13T19:13:59.038791640Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:13:59.039121 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:13:59.046367 systemd[1]: Finished sshkeys.service.
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053354600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053427120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053445880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053464400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053480280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:13:59.053616 containerd[1499]: time="2025-02-13T19:13:59.053686680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054002280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054142520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054162560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054178120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054194680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054221320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054235720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054251640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.054431 containerd[1499]: time="2025-02-13T19:13:59.054267720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056399080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056434800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056451680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056479680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056495360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056507880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056524840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056538240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056554320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056568760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056583600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056599680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056618560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.056623 containerd[1499]: time="2025-02-13T19:13:59.056636880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056658040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056671800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056687280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056716120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056733240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057017 containerd[1499]: time="2025-02-13T19:13:59.056744800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057134480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057166160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057263680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057299040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057313120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057346880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057359480Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:13:59.057633 containerd[1499]: time="2025-02-13T19:13:59.057378600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:13:59.058151 containerd[1499]: time="2025-02-13T19:13:59.057885000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:13:59.058151 containerd[1499]: time="2025-02-13T19:13:59.057940760Z" level=info msg="Connect containerd service"
Feb 13 19:13:59.058151 containerd[1499]: time="2025-02-13T19:13:59.057995680Z" level=info msg="using legacy CRI server"
Feb 13 19:13:59.058151 containerd[1499]: time="2025-02-13T19:13:59.058005960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:13:59.060538 containerd[1499]: time="2025-02-13T19:13:59.060380640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:13:59.061554 containerd[1499]: time="2025-02-13T19:13:59.061271280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:13:59.063446 containerd[1499]: time="2025-02-13T19:13:59.063399000Z" level=info msg="Start subscribing containerd event"
Feb 13 19:13:59.063517 containerd[1499]: time="2025-02-13T19:13:59.063460400Z" level=info msg="Start recovering state"
Feb 13 19:13:59.063760 containerd[1499]: time="2025-02-13T19:13:59.063549120Z" level=info msg="Start event monitor"
Feb 13 19:13:59.063760 containerd[1499]: time="2025-02-13T19:13:59.063567000Z" level=info msg="Start snapshots syncer"
Feb 13 19:13:59.063760 containerd[1499]: time="2025-02-13T19:13:59.063577600Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:13:59.063760 containerd[1499]: time="2025-02-13T19:13:59.063585040Z" level=info msg="Start streaming server"
Feb 13 19:13:59.064540 containerd[1499]: time="2025-02-13T19:13:59.064513080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:13:59.064585 containerd[1499]: time="2025-02-13T19:13:59.064570760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:13:59.069570 containerd[1499]: time="2025-02-13T19:13:59.068449760Z" level=info msg="containerd successfully booted in 0.126208s"
Feb 13 19:13:59.068588 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:13:59.290940 tar[1484]: linux-arm64/LICENSE
Feb 13 19:13:59.290940 tar[1484]: linux-arm64/README.md
Feb 13 19:13:59.307014 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:13:59.410490 systemd-networkd[1398]: eth0: Gained IPv6LL
Feb 13 19:13:59.412368 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Feb 13 19:13:59.415653 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:13:59.417179 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:13:59.428015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:13:59.432594 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:13:59.497208 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:13:59.666793 systemd-networkd[1398]: eth1: Gained IPv6LL
Feb 13 19:13:59.668917 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Feb 13 19:14:00.203754 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:00.204093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:00.822313 kubelet[1582]: E0213 19:14:00.820991 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:00.824505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:00.825084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:00.826393 systemd[1]: kubelet.service: Consumed 877ms CPU time, 239.7M memory peak.
Feb 13 19:14:00.903012 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:14:00.932328 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:14:00.945873 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:14:00.955610 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:14:00.956606 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:14:00.965986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:14:00.980104 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:14:00.991762 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:14:00.995074 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 19:14:00.996804 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:14:00.997718 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:14:01.000450 systemd[1]: Startup finished in 862ms (kernel) + 6.373s (initrd) + 5.969s (userspace) = 13.205s.
Feb 13 19:14:10.880701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:14:10.892802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:14:11.025880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:11.031572 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:11.091460 kubelet[1618]: E0213 19:14:11.091331 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:11.094982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:11.095153 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:11.095877 systemd[1]: kubelet.service: Consumed 173ms CPU time, 97.3M memory peak.
Feb 13 19:14:21.130707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:14:21.135665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:14:21.236480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:21.250989 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:21.303018 kubelet[1634]: E0213 19:14:21.302950 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:21.305564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:21.305739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:21.306714 systemd[1]: kubelet.service: Consumed 154ms CPU time, 94.3M memory peak.
Feb 13 19:14:29.682694 systemd-timesyncd[1359]: Contacted time server 78.47.168.188:123 (2.flatcar.pool.ntp.org).
Feb 13 19:14:29.682814 systemd-timesyncd[1359]: Initial clock synchronization to Thu 2025-02-13 19:14:29.682455 UTC.
Feb 13 19:14:29.683643 systemd-resolved[1347]: Clock change detected. Flushing caches.
Feb 13 19:14:30.932479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 19:14:30.938071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:14:31.068063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:31.076476 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:31.130229 kubelet[1650]: E0213 19:14:31.130125 1650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:31.133246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:31.133501 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:31.134337 systemd[1]: kubelet.service: Consumed 159ms CPU time, 94.5M memory peak.
Feb 13 19:14:41.183194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 19:14:41.193133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:14:41.322130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:41.332610 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:41.388467 kubelet[1665]: E0213 19:14:41.388419 1665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:41.391971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:41.392344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:41.393265 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.5M memory peak.
Feb 13 19:14:43.390936 update_engine[1480]: I20250213 19:14:43.390484 1480 update_attempter.cc:509] Updating boot flags...
Feb 13 19:14:43.443877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1683)
Feb 13 19:14:43.542902 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1684)
Feb 13 19:14:51.432438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 19:14:51.440118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:14:51.562191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:14:51.564974 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:14:51.615355 kubelet[1700]: E0213 19:14:51.615269 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:14:51.618533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:14:51.618773 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:14:51.619496 systemd[1]: kubelet.service: Consumed 150ms CPU time, 96.8M memory peak.
Feb 13 19:15:01.682184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Feb 13 19:15:01.694204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:15:01.814524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:15:01.825353 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:01.880891 kubelet[1716]: E0213 19:15:01.880839 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:01.883596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:01.885267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:01.885927 systemd[1]: kubelet.service: Consumed 158ms CPU time, 94.5M memory peak. Feb 13 19:15:11.934141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 19:15:11.945286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:15:12.109076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:15:12.111104 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:12.179116 kubelet[1731]: E0213 19:15:12.179028 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:12.183276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:12.183574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:12.184953 systemd[1]: kubelet.service: Consumed 180ms CPU time, 94.6M memory peak. 
Feb 13 19:15:22.432934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 19:15:22.444229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:15:22.576739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:15:22.589386 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:22.642226 kubelet[1746]: E0213 19:15:22.642060 1746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:22.645231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:22.645960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:22.646561 systemd[1]: kubelet.service: Consumed 162ms CPU time, 96.3M memory peak. Feb 13 19:15:32.683062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 19:15:32.694235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:15:32.828160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:15:32.829592 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:32.890897 kubelet[1763]: E0213 19:15:32.890825 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:32.893992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:32.894361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:32.895295 systemd[1]: kubelet.service: Consumed 166ms CPU time, 96.1M memory peak. Feb 13 19:15:42.932605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 19:15:42.938115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:15:43.069914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:15:43.089269 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:43.139914 kubelet[1779]: E0213 19:15:43.139849 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:43.143575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:43.143732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:43.144468 systemd[1]: kubelet.service: Consumed 164ms CPU time, 96.4M memory peak. 
Feb 13 19:15:47.779027 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:15:47.793454 systemd[1]: Started sshd@0-142.132.176.244:22-139.178.68.195:49464.service - OpenSSH per-connection server daemon (139.178.68.195:49464). Feb 13 19:15:48.800058 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 49464 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc Feb 13 19:15:48.804012 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:15:48.812471 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:15:48.818138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:15:48.827927 systemd-logind[1478]: New session 1 of user core. Feb 13 19:15:48.833914 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:15:48.841478 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:15:48.847146 (systemd)[1792]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:15:48.851374 systemd-logind[1478]: New session c1 of user core. Feb 13 19:15:48.985843 systemd[1792]: Queued start job for default target default.target. Feb 13 19:15:48.998112 systemd[1792]: Created slice app.slice - User Application Slice. Feb 13 19:15:48.998149 systemd[1792]: Reached target paths.target - Paths. Feb 13 19:15:48.998292 systemd[1792]: Reached target timers.target - Timers. Feb 13 19:15:49.000401 systemd[1792]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:15:49.016174 systemd[1792]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:15:49.016328 systemd[1792]: Reached target sockets.target - Sockets. Feb 13 19:15:49.016414 systemd[1792]: Reached target basic.target - Basic System. Feb 13 19:15:49.016465 systemd[1792]: Reached target default.target - Main User Target. 
Feb 13 19:15:49.016501 systemd[1792]: Startup finished in 157ms. Feb 13 19:15:49.016589 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:15:49.030121 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:15:49.730193 systemd[1]: Started sshd@1-142.132.176.244:22-139.178.68.195:49476.service - OpenSSH per-connection server daemon (139.178.68.195:49476). Feb 13 19:15:50.717208 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 49476 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc Feb 13 19:15:50.720605 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:15:50.726650 systemd-logind[1478]: New session 2 of user core. Feb 13 19:15:50.736201 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:15:51.399530 sshd[1806]: Connection closed by 139.178.68.195 port 49476 Feb 13 19:15:51.398798 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Feb 13 19:15:51.403921 systemd[1]: sshd@1-142.132.176.244:22-139.178.68.195:49476.service: Deactivated successfully. Feb 13 19:15:51.404125 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:15:51.406351 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:15:51.409069 systemd-logind[1478]: Removed session 2. Feb 13 19:15:51.583709 systemd[1]: Started sshd@2-142.132.176.244:22-139.178.68.195:49492.service - OpenSSH per-connection server daemon (139.178.68.195:49492). Feb 13 19:15:52.571007 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 49492 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc Feb 13 19:15:52.572878 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:15:52.580561 systemd-logind[1478]: New session 3 of user core. Feb 13 19:15:52.591095 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 19:15:53.183724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 19:15:53.190225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:15:53.247938 sshd[1814]: Connection closed by 139.178.68.195 port 49492 Feb 13 19:15:53.248804 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Feb 13 19:15:53.254669 systemd[1]: sshd@2-142.132.176.244:22-139.178.68.195:49492.service: Deactivated successfully. Feb 13 19:15:53.256747 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:15:53.258890 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:15:53.266390 systemd-logind[1478]: Removed session 3. Feb 13 19:15:53.336839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:15:53.352658 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:15:53.411857 kubelet[1827]: E0213 19:15:53.411809 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:15:53.415688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:15:53.415856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:15:53.416262 systemd[1]: kubelet.service: Consumed 172ms CPU time, 95.3M memory peak. Feb 13 19:15:53.429735 systemd[1]: Started sshd@3-142.132.176.244:22-139.178.68.195:49506.service - OpenSSH per-connection server daemon (139.178.68.195:49506). 
Feb 13 19:15:54.431499 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 49506 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc Feb 13 19:15:54.433704 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:15:54.439739 systemd-logind[1478]: New session 4 of user core. Feb 13 19:15:54.451151 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:15:55.118901 sshd[1838]: Connection closed by 139.178.68.195 port 49506 Feb 13 19:15:55.119813 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Feb 13 19:15:55.125693 systemd[1]: sshd@3-142.132.176.244:22-139.178.68.195:49506.service: Deactivated successfully. Feb 13 19:15:55.129593 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:15:55.130750 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:15:55.132069 systemd-logind[1478]: Removed session 4. Feb 13 19:15:55.303200 systemd[1]: Started sshd@4-142.132.176.244:22-139.178.68.195:49520.service - OpenSSH per-connection server daemon (139.178.68.195:49520). Feb 13 19:15:56.300555 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 49520 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc Feb 13 19:15:56.303369 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:15:56.312580 systemd-logind[1478]: New session 5 of user core. Feb 13 19:15:56.328154 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:15:56.840560 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:15:56.841395 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:15:57.189365 (dockerd)[1865]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:15:57.189396 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:15:57.457274 dockerd[1865]: time="2025-02-13T19:15:57.456716656Z" level=info msg="Starting up" Feb 13 19:15:57.544392 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3264093191-merged.mount: Deactivated successfully. Feb 13 19:15:57.566209 systemd[1]: var-lib-docker-metacopy\x2dcheck407960719-merged.mount: Deactivated successfully. Feb 13 19:15:57.575019 dockerd[1865]: time="2025-02-13T19:15:57.574581703Z" level=info msg="Loading containers: start." Feb 13 19:15:57.773892 kernel: Initializing XFRM netlink socket Feb 13 19:15:57.870268 systemd-networkd[1398]: docker0: Link UP Feb 13 19:15:57.908845 dockerd[1865]: time="2025-02-13T19:15:57.908597068Z" level=info msg="Loading containers: done." 
Feb 13 19:15:57.931279 dockerd[1865]: time="2025-02-13T19:15:57.931198045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:15:57.931512 dockerd[1865]: time="2025-02-13T19:15:57.931375725Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:15:57.931712 dockerd[1865]: time="2025-02-13T19:15:57.931641205Z" level=info msg="Daemon has completed initialization" Feb 13 19:15:57.977991 dockerd[1865]: time="2025-02-13T19:15:57.977902319Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:15:57.979013 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:15:59.165450 containerd[1499]: time="2025-02-13T19:15:59.165160653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:15:59.837991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910129077.mount: Deactivated successfully. 
Feb 13 19:16:01.313839 containerd[1499]: time="2025-02-13T19:16:01.313640298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:01.317014 containerd[1499]: time="2025-02-13T19:16:01.316510779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865299" Feb 13 19:16:01.318397 containerd[1499]: time="2025-02-13T19:16:01.318287020Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:01.322958 containerd[1499]: time="2025-02-13T19:16:01.322842303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:01.325043 containerd[1499]: time="2025-02-13T19:16:01.324734464Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.159514811s" Feb 13 19:16:01.325043 containerd[1499]: time="2025-02-13T19:16:01.324819584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:16:01.349100 containerd[1499]: time="2025-02-13T19:16:01.348804718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:16:03.232627 containerd[1499]: time="2025-02-13T19:16:03.232558577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:03.234848 containerd[1499]: time="2025-02-13T19:16:03.234742418Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898614" Feb 13 19:16:03.235561 containerd[1499]: time="2025-02-13T19:16:03.235323818Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:03.240403 containerd[1499]: time="2025-02-13T19:16:03.240323580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:03.242044 containerd[1499]: time="2025-02-13T19:16:03.241567821Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.892720423s" Feb 13 19:16:03.242044 containerd[1499]: time="2025-02-13T19:16:03.241610701Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:16:03.266371 containerd[1499]: time="2025-02-13T19:16:03.266318713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:16:03.432208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 19:16:03.443131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:16:03.562527 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:16:03.562814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:03.619571 kubelet[2130]: E0213 19:16:03.619496 2130 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:16:03.622365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:16:03.622566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:16:03.623379 systemd[1]: kubelet.service: Consumed 150ms CPU time, 96.7M memory peak. Feb 13 19:16:04.812485 containerd[1499]: time="2025-02-13T19:16:04.810648540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:04.812485 containerd[1499]: time="2025-02-13T19:16:04.812121620Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164954" Feb 13 19:16:04.813606 containerd[1499]: time="2025-02-13T19:16:04.813527061Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:04.817931 containerd[1499]: time="2025-02-13T19:16:04.817849103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:04.819849 containerd[1499]: time="2025-02-13T19:16:04.819385144Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.553017991s" Feb 13 19:16:04.819849 containerd[1499]: time="2025-02-13T19:16:04.819432024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:16:04.848504 containerd[1499]: time="2025-02-13T19:16:04.848461797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:16:05.929267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541234653.mount: Deactivated successfully. Feb 13 19:16:06.189571 containerd[1499]: time="2025-02-13T19:16:06.189480160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:06.191259 containerd[1499]: time="2025-02-13T19:16:06.190927805Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396" Feb 13 19:16:06.192723 containerd[1499]: time="2025-02-13T19:16:06.192644130Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:06.197376 containerd[1499]: time="2025-02-13T19:16:06.195928062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:06.197376 containerd[1499]: time="2025-02-13T19:16:06.197098786Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id 
\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.348411869s" Feb 13 19:16:06.197376 containerd[1499]: time="2025-02-13T19:16:06.197142506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:16:06.230591 containerd[1499]: time="2025-02-13T19:16:06.230538221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:16:06.831900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968612941.mount: Deactivated successfully. Feb 13 19:16:07.599946 containerd[1499]: time="2025-02-13T19:16:07.599877229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:07.601487 containerd[1499]: time="2025-02-13T19:16:07.601381670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 19:16:07.602852 containerd[1499]: time="2025-02-13T19:16:07.602753068Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:07.609945 containerd[1499]: time="2025-02-13T19:16:07.609663176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:07.611795 containerd[1499]: time="2025-02-13T19:16:07.611411864Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.380450881s" Feb 13 19:16:07.611795 containerd[1499]: time="2025-02-13T19:16:07.611672791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:16:07.640614 containerd[1499]: time="2025-02-13T19:16:07.640517500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:16:08.138166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423039457.mount: Deactivated successfully. Feb 13 19:16:08.149035 containerd[1499]: time="2025-02-13T19:16:08.148950805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:08.150569 containerd[1499]: time="2025-02-13T19:16:08.150490566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 19:16:08.152790 containerd[1499]: time="2025-02-13T19:16:08.151328388Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:08.154922 containerd[1499]: time="2025-02-13T19:16:08.154872282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:08.156043 containerd[1499]: time="2025-02-13T19:16:08.155630222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 515.02416ms" Feb 13 19:16:08.156043 containerd[1499]: time="2025-02-13T19:16:08.155664343Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:16:08.181360 containerd[1499]: time="2025-02-13T19:16:08.181323305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:16:08.775559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587477351.mount: Deactivated successfully. Feb 13 19:16:10.847021 containerd[1499]: time="2025-02-13T19:16:10.846915145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:10.849578 containerd[1499]: time="2025-02-13T19:16:10.848790792Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Feb 13 19:16:10.852790 containerd[1499]: time="2025-02-13T19:16:10.851094250Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:10.856732 containerd[1499]: time="2025-02-13T19:16:10.856602229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:16:10.859476 containerd[1499]: time="2025-02-13T19:16:10.859411019Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.678048513s" Feb 13 
19:16:10.859476 containerd[1499]: time="2025-02-13T19:16:10.859478821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:16:13.682057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 19:16:13.688219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:16:13.818955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:13.828376 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:16:13.912957 kubelet[2328]: E0213 19:16:13.912904 2328 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:16:13.917618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:16:13.918040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:16:13.918907 systemd[1]: kubelet.service: Consumed 158ms CPU time, 96.6M memory peak. Feb 13 19:16:15.993421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:15.993614 systemd[1]: kubelet.service: Consumed 158ms CPU time, 96.6M memory peak. Feb 13 19:16:16.012365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:16:16.053498 systemd[1]: Reload requested from client PID 2343 ('systemctl') (unit session-5.scope)... Feb 13 19:16:16.053669 systemd[1]: Reloading... Feb 13 19:16:16.218802 zram_generator::config[2397]: No configuration found. 
Feb 13 19:16:16.323642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:16:16.432869 systemd[1]: Reloading finished in 378 ms. Feb 13 19:16:16.493656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:16.513280 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:16:16.520342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:16:16.523080 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:16:16.523557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:16.523718 systemd[1]: kubelet.service: Consumed 110ms CPU time, 83.7M memory peak. Feb 13 19:16:16.529533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:16:16.652060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:16.661431 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:16:16.720533 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:16:16.722468 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:16:16.722468 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:16:16.722468 kubelet[2443]: I0213 19:16:16.721109 2443 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:16:17.290388 kubelet[2443]: I0213 19:16:17.290339 2443 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:16:17.290603 kubelet[2443]: I0213 19:16:17.290590 2443 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:16:17.291015 kubelet[2443]: I0213 19:16:17.290994 2443 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:16:17.322527 kubelet[2443]: E0213 19:16:17.322434 2443 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://142.132.176.244:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.324804 kubelet[2443]: I0213 19:16:17.324420 2443 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:16:17.340651 kubelet[2443]: I0213 19:16:17.339920 2443 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:16:17.342228 kubelet[2443]: I0213 19:16:17.342155 2443 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:16:17.342632 kubelet[2443]: I0213 19:16:17.342398 2443 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-4-0b1b2da462","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:16:17.342910 kubelet[2443]: I0213 19:16:17.342889 2443 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 19:16:17.342978 kubelet[2443]: I0213 19:16:17.342969 2443 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:16:17.343376 kubelet[2443]: I0213 19:16:17.343356 2443 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:16:17.346934 kubelet[2443]: I0213 19:16:17.346158 2443 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:16:17.346934 kubelet[2443]: I0213 19:16:17.346205 2443 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:16:17.346934 kubelet[2443]: I0213 19:16:17.346519 2443 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:16:17.346934 kubelet[2443]: I0213 19:16:17.346690 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:16:17.349486 kubelet[2443]: W0213 19:16:17.349276 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.176.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4-0b1b2da462&limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.349730 kubelet[2443]: E0213 19:16:17.349704 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://142.132.176.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4-0b1b2da462&limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.350789 kubelet[2443]: W0213 19:16:17.350713 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://142.132.176.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.351009 kubelet[2443]: E0213 19:16:17.350989 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://142.132.176.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.351678 kubelet[2443]: I0213 19:16:17.351647 2443 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:16:17.352779 kubelet[2443]: I0213 19:16:17.352193 2443 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:16:17.352779 kubelet[2443]: W0213 19:16:17.352323 2443 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:16:17.358196 kubelet[2443]: I0213 19:16:17.358145 2443 server.go:1264] "Started kubelet" Feb 13 19:16:17.362678 kubelet[2443]: I0213 19:16:17.362604 2443 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:16:17.364404 kubelet[2443]: I0213 19:16:17.364315 2443 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:16:17.365726 kubelet[2443]: I0213 19:16:17.365646 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:16:17.366787 kubelet[2443]: I0213 19:16:17.366009 2443 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:16:17.366787 kubelet[2443]: E0213 19:16:17.366179 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.176.244:6443/api/v1/namespaces/default/events\": dial tcp 142.132.176.244:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-4-0b1b2da462.1823da8c556517bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4-0b1b2da462,UID:ci-4230-0-1-4-0b1b2da462,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4-0b1b2da462,},FirstTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,LastTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4-0b1b2da462,}" Feb 13 19:16:17.369608 kubelet[2443]: I0213 19:16:17.369571 2443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:16:17.377324 kubelet[2443]: E0213 19:16:17.375741 2443 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-1-4-0b1b2da462\" not found" Feb 13 19:16:17.377324 kubelet[2443]: I0213 19:16:17.376324 2443 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:16:17.377324 kubelet[2443]: I0213 19:16:17.376454 2443 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:16:17.378172 kubelet[2443]: I0213 19:16:17.378149 2443 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:16:17.379159 kubelet[2443]: W0213 19:16:17.379104 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.176.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.379288 kubelet[2443]: E0213 19:16:17.379276 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://142.132.176.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.379543 kubelet[2443]: E0213 19:16:17.379513 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://142.132.176.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4-0b1b2da462?timeout=10s\": dial tcp 142.132.176.244:6443: connect: connection refused" interval="200ms" Feb 13 19:16:17.379863 kubelet[2443]: I0213 19:16:17.379845 2443 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:16:17.380059 kubelet[2443]: I0213 19:16:17.380035 2443 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:16:17.380888 kubelet[2443]: E0213 19:16:17.380557 2443 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:16:17.382084 kubelet[2443]: I0213 19:16:17.382060 2443 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:16:17.390564 kubelet[2443]: I0213 19:16:17.389361 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:16:17.391820 kubelet[2443]: I0213 19:16:17.391786 2443 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:16:17.391820 kubelet[2443]: I0213 19:16:17.391820 2443 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:16:17.391956 kubelet[2443]: I0213 19:16:17.391845 2443 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:16:17.391956 kubelet[2443]: E0213 19:16:17.391890 2443 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:16:17.400655 kubelet[2443]: W0213 19:16:17.400593 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.176.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.400655 kubelet[2443]: E0213 19:16:17.400665 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://142.132.176.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:17.417618 kubelet[2443]: I0213 19:16:17.417590 2443 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:16:17.417925 kubelet[2443]: I0213 19:16:17.417859 2443 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:16:17.417925 kubelet[2443]: I0213 19:16:17.417886 2443 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:16:17.420709 kubelet[2443]: I0213 19:16:17.420558 2443 policy_none.go:49] "None policy: Start" Feb 13 19:16:17.421774 kubelet[2443]: I0213 19:16:17.421728 2443 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:16:17.421958 kubelet[2443]: I0213 19:16:17.421892 2443 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:16:17.430727 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 19:16:17.450810 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:16:17.458076 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:16:17.473599 kubelet[2443]: I0213 19:16:17.471820 2443 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:16:17.473599 kubelet[2443]: I0213 19:16:17.472269 2443 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:16:17.473599 kubelet[2443]: I0213 19:16:17.472604 2443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:16:17.476135 kubelet[2443]: E0213 19:16:17.476106 2443 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-4-0b1b2da462\" not found" Feb 13 19:16:17.478445 kubelet[2443]: I0213 19:16:17.478360 2443 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.479409 kubelet[2443]: E0213 19:16:17.479186 2443 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.176.244:6443/api/v1/nodes\": dial tcp 142.132.176.244:6443: connect: connection refused" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.492929 kubelet[2443]: I0213 19:16:17.492841 2443 topology_manager.go:215] "Topology Admit Handler" podUID="de224405d61f07758f0485e69c66ce82" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.496724 kubelet[2443]: I0213 19:16:17.496222 2443 topology_manager.go:215] "Topology Admit Handler" podUID="f243d8af4bda50e7fb403e4078a95ac1" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.498387 kubelet[2443]: I0213 19:16:17.498339 2443 topology_manager.go:215] "Topology Admit Handler" 
podUID="fcb2f3b46526810c727ede03f0e37b28" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.511692 systemd[1]: Created slice kubepods-burstable-podde224405d61f07758f0485e69c66ce82.slice - libcontainer container kubepods-burstable-podde224405d61f07758f0485e69c66ce82.slice. Feb 13 19:16:17.529819 systemd[1]: Created slice kubepods-burstable-podfcb2f3b46526810c727ede03f0e37b28.slice - libcontainer container kubepods-burstable-podfcb2f3b46526810c727ede03f0e37b28.slice. Feb 13 19:16:17.535547 systemd[1]: Created slice kubepods-burstable-podf243d8af4bda50e7fb403e4078a95ac1.slice - libcontainer container kubepods-burstable-podf243d8af4bda50e7fb403e4078a95ac1.slice. Feb 13 19:16:17.581815 kubelet[2443]: I0213 19:16:17.579234 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.581815 kubelet[2443]: I0213 19:16:17.579338 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.581815 kubelet[2443]: I0213 19:16:17.579430 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 
19:16:17.581815 kubelet[2443]: I0213 19:16:17.579527 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.581815 kubelet[2443]: E0213 19:16:17.580400 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.176.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4-0b1b2da462?timeout=10s\": dial tcp 142.132.176.244:6443: connect: connection refused" interval="400ms" Feb 13 19:16:17.582631 kubelet[2443]: I0213 19:16:17.582283 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.582631 kubelet[2443]: I0213 19:16:17.582479 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.582631 kubelet[2443]: I0213 19:16:17.582511 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: 
\"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.582631 kubelet[2443]: I0213 19:16:17.582538 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.582631 kubelet[2443]: I0213 19:16:17.582558 2443 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcb2f3b46526810c727ede03f0e37b28-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-4-0b1b2da462\" (UID: \"fcb2f3b46526810c727ede03f0e37b28\") " pod="kube-system/kube-scheduler-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.658534 kubelet[2443]: E0213 19:16:17.658299 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.176.244:6443/api/v1/namespaces/default/events\": dial tcp 142.132.176.244:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-4-0b1b2da462.1823da8c556517bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4-0b1b2da462,UID:ci-4230-0-1-4-0b1b2da462,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4-0b1b2da462,},FirstTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,LastTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4-0b1b2da462,}" Feb 13 19:16:17.682175 kubelet[2443]: I0213 19:16:17.681599 2443 
kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.682175 kubelet[2443]: E0213 19:16:17.682099 2443 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.176.244:6443/api/v1/nodes\": dial tcp 142.132.176.244:6443: connect: connection refused" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:17.825148 containerd[1499]: time="2025-02-13T19:16:17.824818199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-4-0b1b2da462,Uid:de224405d61f07758f0485e69c66ce82,Namespace:kube-system,Attempt:0,}" Feb 13 19:16:17.835214 containerd[1499]: time="2025-02-13T19:16:17.834850446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-4-0b1b2da462,Uid:fcb2f3b46526810c727ede03f0e37b28,Namespace:kube-system,Attempt:0,}" Feb 13 19:16:17.839388 containerd[1499]: time="2025-02-13T19:16:17.839124055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-4-0b1b2da462,Uid:f243d8af4bda50e7fb403e4078a95ac1,Namespace:kube-system,Attempt:0,}" Feb 13 19:16:17.982060 kubelet[2443]: E0213 19:16:17.981931 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.176.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4-0b1b2da462?timeout=10s\": dial tcp 142.132.176.244:6443: connect: connection refused" interval="800ms" Feb 13 19:16:18.089662 kubelet[2443]: I0213 19:16:18.088560 2443 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:18.089662 kubelet[2443]: E0213 19:16:18.089005 2443 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.176.244:6443/api/v1/nodes\": dial tcp 142.132.176.244:6443: connect: connection refused" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:18.201672 kubelet[2443]: W0213 19:16:18.201576 2443 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.176.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.201672 kubelet[2443]: E0213 19:16:18.201648 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://142.132.176.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.332496 kubelet[2443]: W0213 19:16:18.332378 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.176.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4-0b1b2da462&limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.332496 kubelet[2443]: E0213 19:16:18.332451 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://142.132.176.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4-0b1b2da462&limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.406400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424634291.mount: Deactivated successfully. 
Feb 13 19:16:18.415227 containerd[1499]: time="2025-02-13T19:16:18.415081363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:16:18.421577 containerd[1499]: time="2025-02-13T19:16:18.421473011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 19:16:18.425303 containerd[1499]: time="2025-02-13T19:16:18.424293868Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:16:18.433164 containerd[1499]: time="2025-02-13T19:16:18.431791140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:16:18.435082 containerd[1499]: time="2025-02-13T19:16:18.434986884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:16:18.444488 containerd[1499]: time="2025-02-13T19:16:18.444419674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:16:18.447718 containerd[1499]: time="2025-02-13T19:16:18.445554177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 620.650377ms" Feb 13 19:16:18.449086 containerd[1499]: 
time="2025-02-13T19:16:18.448877564Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:16:18.454586 containerd[1499]: time="2025-02-13T19:16:18.454500998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:16:18.456556 containerd[1499]: time="2025-02-13T19:16:18.456185712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.943294ms" Feb 13 19:16:18.462939 containerd[1499]: time="2025-02-13T19:16:18.462889327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.910678ms" Feb 13 19:16:18.565773 containerd[1499]: time="2025-02-13T19:16:18.565589438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:16:18.566737 containerd[1499]: time="2025-02-13T19:16:18.566003887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:16:18.566737 containerd[1499]: time="2025-02-13T19:16:18.566456216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.566737 containerd[1499]: time="2025-02-13T19:16:18.566635060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.569445 containerd[1499]: time="2025-02-13T19:16:18.569355914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:16:18.570789 containerd[1499]: time="2025-02-13T19:16:18.570509418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:16:18.570789 containerd[1499]: time="2025-02-13T19:16:18.570551179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.570789 containerd[1499]: time="2025-02-13T19:16:18.570682021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.571259 containerd[1499]: time="2025-02-13T19:16:18.571188431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:16:18.571405 containerd[1499]: time="2025-02-13T19:16:18.571248073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:16:18.571405 containerd[1499]: time="2025-02-13T19:16:18.571264313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.571405 containerd[1499]: time="2025-02-13T19:16:18.571359955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:18.599497 systemd[1]: Started cri-containerd-c22f45d4cc60db855f70966c2457897e2d36461d5f4a1618611a55dc3a2dcecf.scope - libcontainer container c22f45d4cc60db855f70966c2457897e2d36461d5f4a1618611a55dc3a2dcecf. 
Feb 13 19:16:18.613205 systemd[1]: Started cri-containerd-e7469daae9f41b3812cd27bd15643a8a8eacf5bc5991b0f110cec281f4b676c2.scope - libcontainer container e7469daae9f41b3812cd27bd15643a8a8eacf5bc5991b0f110cec281f4b676c2. Feb 13 19:16:18.624985 systemd[1]: Started cri-containerd-66ac9516999369d187c5d637b9b365eedf0cd72556aa5c97bda6a6c62210d9da.scope - libcontainer container 66ac9516999369d187c5d637b9b365eedf0cd72556aa5c97bda6a6c62210d9da. Feb 13 19:16:18.665693 containerd[1499]: time="2025-02-13T19:16:18.665312530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-4-0b1b2da462,Uid:de224405d61f07758f0485e69c66ce82,Namespace:kube-system,Attempt:0,} returns sandbox id \"c22f45d4cc60db855f70966c2457897e2d36461d5f4a1618611a55dc3a2dcecf\"" Feb 13 19:16:18.682916 containerd[1499]: time="2025-02-13T19:16:18.682717721Z" level=info msg="CreateContainer within sandbox \"c22f45d4cc60db855f70966c2457897e2d36461d5f4a1618611a55dc3a2dcecf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:16:18.693423 containerd[1499]: time="2025-02-13T19:16:18.693216773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-4-0b1b2da462,Uid:f243d8af4bda50e7fb403e4078a95ac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ac9516999369d187c5d637b9b365eedf0cd72556aa5c97bda6a6c62210d9da\"" Feb 13 19:16:18.699397 containerd[1499]: time="2025-02-13T19:16:18.699141532Z" level=info msg="CreateContainer within sandbox \"66ac9516999369d187c5d637b9b365eedf0cd72556aa5c97bda6a6c62210d9da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:16:18.705744 containerd[1499]: time="2025-02-13T19:16:18.704235915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-4-0b1b2da462,Uid:fcb2f3b46526810c727ede03f0e37b28,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7469daae9f41b3812cd27bd15643a8a8eacf5bc5991b0f110cec281f4b676c2\"" Feb 
13 19:16:18.712720 containerd[1499]: time="2025-02-13T19:16:18.712640165Z" level=info msg="CreateContainer within sandbox \"e7469daae9f41b3812cd27bd15643a8a8eacf5bc5991b0f110cec281f4b676c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:16:18.721071 containerd[1499]: time="2025-02-13T19:16:18.720981493Z" level=info msg="CreateContainer within sandbox \"c22f45d4cc60db855f70966c2457897e2d36461d5f4a1618611a55dc3a2dcecf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f111d19aed94a8b930a419471274a15b19621b617e5731573bc0c9f4efb23116\"" Feb 13 19:16:18.723113 containerd[1499]: time="2025-02-13T19:16:18.722786889Z" level=info msg="StartContainer for \"f111d19aed94a8b930a419471274a15b19621b617e5731573bc0c9f4efb23116\"" Feb 13 19:16:18.743371 containerd[1499]: time="2025-02-13T19:16:18.743254422Z" level=info msg="CreateContainer within sandbox \"66ac9516999369d187c5d637b9b365eedf0cd72556aa5c97bda6a6c62210d9da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ffa88fbe33692b77c110012a3956370e91f6f2d40cd3ae50cb170f39aff0275\"" Feb 13 19:16:18.747583 containerd[1499]: time="2025-02-13T19:16:18.747542469Z" level=info msg="StartContainer for \"8ffa88fbe33692b77c110012a3956370e91f6f2d40cd3ae50cb170f39aff0275\"" Feb 13 19:16:18.766085 containerd[1499]: time="2025-02-13T19:16:18.766028322Z" level=info msg="CreateContainer within sandbox \"e7469daae9f41b3812cd27bd15643a8a8eacf5bc5991b0f110cec281f4b676c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71fda32abb214ffd279240616a0158cb15c396c39ec04af136d9f85a91410021\"" Feb 13 19:16:18.767748 containerd[1499]: time="2025-02-13T19:16:18.767606833Z" level=info msg="StartContainer for \"71fda32abb214ffd279240616a0158cb15c396c39ec04af136d9f85a91410021\"" Feb 13 19:16:18.769284 kubelet[2443]: W0213 19:16:18.769148 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://142.132.176.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.769284 kubelet[2443]: E0213 19:16:18.769230 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://142.132.176.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.771625 systemd[1]: Started cri-containerd-f111d19aed94a8b930a419471274a15b19621b617e5731573bc0c9f4efb23116.scope - libcontainer container f111d19aed94a8b930a419471274a15b19621b617e5731573bc0c9f4efb23116. Feb 13 19:16:18.783178 kubelet[2443]: E0213 19:16:18.782927 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.176.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4-0b1b2da462?timeout=10s\": dial tcp 142.132.176.244:6443: connect: connection refused" interval="1.6s" Feb 13 19:16:18.807006 systemd[1]: Started cri-containerd-8ffa88fbe33692b77c110012a3956370e91f6f2d40cd3ae50cb170f39aff0275.scope - libcontainer container 8ffa88fbe33692b77c110012a3956370e91f6f2d40cd3ae50cb170f39aff0275. Feb 13 19:16:18.825018 systemd[1]: Started cri-containerd-71fda32abb214ffd279240616a0158cb15c396c39ec04af136d9f85a91410021.scope - libcontainer container 71fda32abb214ffd279240616a0158cb15c396c39ec04af136d9f85a91410021. 
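The kubelet entries above show client-go's reflector repeatedly failing to list Services against the unreachable API server (`connection refused`) and retrying until the control-plane containers come up. A minimal sketch of that retry-with-backoff pattern — this is illustrative Python, not the actual client-go implementation; `retry_list` and the delays are invented for the example:

```python
import time

def retry_list(list_fn, max_attempts=5, base_delay=0.1):
    """Retry a List-style call with exponential backoff, roughly as a
    reflector does while the API server is unreachable (hypothetical)."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return list_fn()
        except ConnectionRefusedError as err:
            if attempt == max_attempts:
                raise
            # The log above shows the analogous kubelet message:
            # "failed to list *v1.Service ... connect: connection refused"
            print(f"attempt {attempt} failed: {err}; retrying in {delay:.1f}s")
            time.sleep(delay)
            delay *= 2

# Simulated API server that refuses the first two connection attempts.
calls = {"n": 0}
def fake_list():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionRefusedError("dial tcp 142.132.176.244:6443")
    return ["service-a"]

result = retry_list(fake_list)
```

In the log, the failures stop once the static apiserver pod started here begins answering on port 6443.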
Feb 13 19:16:18.860208 containerd[1499]: time="2025-02-13T19:16:18.860000417Z" level=info msg="StartContainer for \"f111d19aed94a8b930a419471274a15b19621b617e5731573bc0c9f4efb23116\" returns successfully" Feb 13 19:16:18.881396 containerd[1499]: time="2025-02-13T19:16:18.881194805Z" level=info msg="StartContainer for \"8ffa88fbe33692b77c110012a3956370e91f6f2d40cd3ae50cb170f39aff0275\" returns successfully" Feb 13 19:16:18.895695 kubelet[2443]: I0213 19:16:18.894641 2443 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:18.895695 kubelet[2443]: E0213 19:16:18.895088 2443 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.176.244:6443/api/v1/nodes\": dial tcp 142.132.176.244:6443: connect: connection refused" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:18.908657 containerd[1499]: time="2025-02-13T19:16:18.908378073Z" level=info msg="StartContainer for \"71fda32abb214ffd279240616a0158cb15c396c39ec04af136d9f85a91410021\" returns successfully" Feb 13 19:16:18.933799 kubelet[2443]: W0213 19:16:18.933683 2443 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.176.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:18.933799 kubelet[2443]: E0213 19:16:18.933771 2443 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://142.132.176.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.176.244:6443: connect: connection refused Feb 13 19:16:20.499187 kubelet[2443]: I0213 19:16:20.498446 2443 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:21.184589 kubelet[2443]: E0213 19:16:21.184541 2443 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"ci-4230-0-1-4-0b1b2da462\" not found" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:21.245074 kubelet[2443]: I0213 19:16:21.244839 2443 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:21.351100 kubelet[2443]: I0213 19:16:21.350813 2443 apiserver.go:52] "Watching apiserver" Feb 13 19:16:21.377121 kubelet[2443]: I0213 19:16:21.377068 2443 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:16:21.599116 kubelet[2443]: E0213 19:16:21.598622 2443 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:23.611845 systemd[1]: Reload requested from client PID 2723 ('systemctl') (unit session-5.scope)... Feb 13 19:16:23.611866 systemd[1]: Reloading... Feb 13 19:16:23.756793 zram_generator::config[2774]: No configuration found. Feb 13 19:16:23.866015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:16:23.987622 systemd[1]: Reloading finished in 374 ms. Feb 13 19:16:24.020267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:16:24.021563 kubelet[2443]: I0213 19:16:24.021095 2443 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:16:24.022616 kubelet[2443]: E0213 19:16:24.020076 2443 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230-0-1-4-0b1b2da462.1823da8c556517bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4-0b1b2da462,UID:ci-4230-0-1-4-0b1b2da462,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4-0b1b2da462,},FirstTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,LastTimestamp:2025-02-13 19:16:17.358108607 +0000 UTC m=+0.692356581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4-0b1b2da462,}" Feb 13 19:16:24.039276 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:16:24.039984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:24.040160 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 111.9M memory peak. Feb 13 19:16:24.047231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:16:24.189997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:16:24.193126 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:16:24.263036 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:16:24.263036 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:16:24.263036 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:16:24.263716 kubelet[2813]: I0213 19:16:24.263069 2813 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:16:24.269612 kubelet[2813]: I0213 19:16:24.269568 2813 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:16:24.269612 kubelet[2813]: I0213 19:16:24.269598 2813 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:16:24.269909 kubelet[2813]: I0213 19:16:24.269833 2813 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:16:24.271632 kubelet[2813]: I0213 19:16:24.271597 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:16:24.273221 kubelet[2813]: I0213 19:16:24.273171 2813 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:16:24.285745 kubelet[2813]: I0213 19:16:24.285590 2813 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:16:24.286411 kubelet[2813]: I0213 19:16:24.285957 2813 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:16:24.286411 kubelet[2813]: I0213 19:16:24.285988 2813 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-4-0b1b2da462","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:16:24.286411 kubelet[2813]: I0213 19:16:24.286372 2813 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 19:16:24.286411 kubelet[2813]: I0213 19:16:24.286384 2813 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:16:24.286584 kubelet[2813]: I0213 19:16:24.286426 2813 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:16:24.286584 kubelet[2813]: I0213 19:16:24.286545 2813 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:16:24.286584 kubelet[2813]: I0213 19:16:24.286559 2813 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:16:24.286644 kubelet[2813]: I0213 19:16:24.286593 2813 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:16:24.286644 kubelet[2813]: I0213 19:16:24.286611 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:16:24.287911 kubelet[2813]: I0213 19:16:24.287867 2813 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:16:24.288236 kubelet[2813]: I0213 19:16:24.288220 2813 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:16:24.288965 kubelet[2813]: I0213 19:16:24.288946 2813 server.go:1264] "Started kubelet" Feb 13 19:16:24.291554 kubelet[2813]: I0213 19:16:24.291520 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:16:24.298157 kubelet[2813]: I0213 19:16:24.298095 2813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:16:24.299570 kubelet[2813]: I0213 19:16:24.299152 2813 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:16:24.302289 kubelet[2813]: I0213 19:16:24.302219 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:16:24.302289 kubelet[2813]: I0213 19:16:24.302514 2813 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:16:24.308817 kubelet[2813]: I0213 19:16:24.308388 2813 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:16:24.314256 kubelet[2813]: I0213 19:16:24.314207 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:16:24.314646 kubelet[2813]: I0213 19:16:24.314629 2813 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:16:24.324860 kubelet[2813]: I0213 19:16:24.322843 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:16:24.327668 kubelet[2813]: I0213 19:16:24.327630 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:16:24.327852 kubelet[2813]: I0213 19:16:24.327841 2813 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:16:24.327946 kubelet[2813]: I0213 19:16:24.327936 2813 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:16:24.328207 kubelet[2813]: E0213 19:16:24.328182 2813 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:16:24.337021 kubelet[2813]: I0213 19:16:24.336980 2813 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:16:24.337440 kubelet[2813]: I0213 19:16:24.337089 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:16:24.346909 kubelet[2813]: I0213 19:16:24.343918 2813 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:16:24.346909 kubelet[2813]: E0213 19:16:24.346330 2813 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:16:24.411100 kubelet[2813]: I0213 19:16:24.411048 2813 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:16:24.411828 kubelet[2813]: I0213 19:16:24.411661 2813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:16:24.411828 kubelet[2813]: I0213 19:16:24.411700 2813 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:16:24.411828 kubelet[2813]: I0213 19:16:24.411814 2813 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.412318 kubelet[2813]: I0213 19:16:24.412290 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:16:24.412846 kubelet[2813]: I0213 19:16:24.412436 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:16:24.412846 kubelet[2813]: I0213 19:16:24.412471 2813 policy_none.go:49] "None policy: Start" Feb 13 19:16:24.413741 kubelet[2813]: I0213 19:16:24.413594 2813 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:16:24.413741 kubelet[2813]: I0213 19:16:24.413631 2813 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:16:24.414062 kubelet[2813]: I0213 19:16:24.414043 2813 state_mem.go:75] "Updated machine memory state" Feb 13 19:16:24.422034 kubelet[2813]: I0213 19:16:24.421992 2813 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.422192 kubelet[2813]: I0213 19:16:24.422089 2813 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.424298 kubelet[2813]: I0213 19:16:24.423943 2813 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:16:24.424298 kubelet[2813]: I0213 19:16:24.424155 2813 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 
13 19:16:24.424298 kubelet[2813]: I0213 19:16:24.424270 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:16:24.428565 kubelet[2813]: I0213 19:16:24.428457 2813 topology_manager.go:215] "Topology Admit Handler" podUID="fcb2f3b46526810c727ede03f0e37b28" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.429038 kubelet[2813]: I0213 19:16:24.428622 2813 topology_manager.go:215] "Topology Admit Handler" podUID="de224405d61f07758f0485e69c66ce82" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.429038 kubelet[2813]: I0213 19:16:24.428668 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f243d8af4bda50e7fb403e4078a95ac1" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.515853 kubelet[2813]: I0213 19:16:24.514989 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.515853 kubelet[2813]: I0213 19:16:24.515038 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.515853 kubelet[2813]: I0213 19:16:24.515063 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-kubeconfig\") pod 
\"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.515853 kubelet[2813]: I0213 19:16:24.515083 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.515853 kubelet[2813]: I0213 19:16:24.515132 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcb2f3b46526810c727ede03f0e37b28-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-4-0b1b2da462\" (UID: \"fcb2f3b46526810c727ede03f0e37b28\") " pod="kube-system/kube-scheduler-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.516092 kubelet[2813]: I0213 19:16:24.515165 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.516092 kubelet[2813]: I0213 19:16:24.515185 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.516092 kubelet[2813]: I0213 19:16:24.515203 2813 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f243d8af4bda50e7fb403e4078a95ac1-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-4-0b1b2da462\" (UID: \"f243d8af4bda50e7fb403e4078a95ac1\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:24.516092 kubelet[2813]: I0213 19:16:24.515220 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de224405d61f07758f0485e69c66ce82-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" (UID: \"de224405d61f07758f0485e69c66ce82\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:25.294640 kubelet[2813]: I0213 19:16:25.294594 2813 apiserver.go:52] "Watching apiserver" Feb 13 19:16:25.314826 kubelet[2813]: I0213 19:16:25.314744 2813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:16:25.389243 kubelet[2813]: E0213 19:16:25.389197 2813 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-1-4-0b1b2da462\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" Feb 13 19:16:25.406573 kubelet[2813]: I0213 19:16:25.406301 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-4-0b1b2da462" podStartSLOduration=1.406237521 podStartE2EDuration="1.406237521s" podCreationTimestamp="2025-02-13 19:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:16:25.404859298 +0000 UTC m=+1.205209511" watchObservedRunningTime="2025-02-13 19:16:25.406237521 +0000 UTC m=+1.206587694" Feb 13 19:16:25.436554 kubelet[2813]: I0213 19:16:25.436252 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4230-0-1-4-0b1b2da462" podStartSLOduration=1.436227863 podStartE2EDuration="1.436227863s" podCreationTimestamp="2025-02-13 19:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:16:25.421753901 +0000 UTC m=+1.222104074" watchObservedRunningTime="2025-02-13 19:16:25.436227863 +0000 UTC m=+1.236578036" Feb 13 19:16:25.454337 kubelet[2813]: I0213 19:16:25.453497 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-4-0b1b2da462" podStartSLOduration=1.4534745519999999 podStartE2EDuration="1.453474552s" podCreationTimestamp="2025-02-13 19:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:16:25.436582789 +0000 UTC m=+1.236932962" watchObservedRunningTime="2025-02-13 19:16:25.453474552 +0000 UTC m=+1.253824725" Feb 13 19:16:25.658340 sudo[1847]: pam_unix(sudo:session): session closed for user root Feb 13 19:16:25.822241 sshd[1846]: Connection closed by 139.178.68.195 port 49520 Feb 13 19:16:25.821943 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Feb 13 19:16:25.835187 systemd[1]: sshd@4-142.132.176.244:22-139.178.68.195:49520.service: Deactivated successfully. Feb 13 19:16:25.838447 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:16:25.838902 systemd[1]: session-5.scope: Consumed 6.315s CPU time, 256.7M memory peak. Feb 13 19:16:25.840475 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:16:25.842581 systemd-logind[1478]: Removed session 5. 
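The `pod_startup_latency_tracker` entries above report a `podStartSLOduration` derived from the pod's timestamps. As a rough illustration only (hand-rolled parsing, not kubelet code), the gap between a `podCreationTimestamp` and an `observedRunningTime` as printed in this log can be computed like so; note Python's `%f` accepts at most microseconds, so the nanosecond fraction is truncated:

```python
import re
from datetime import datetime

def parse_k8s_time(s):
    """Parse timestamps as printed in the kubelet log above, e.g.
    '2025-02-13 19:16:24 +0000 UTC', truncating nanoseconds to
    microseconds because strptime's %f takes at most 6 digits."""
    s = s.replace(" UTC", "")
    s = re.sub(r"\.(\d{6})\d+", r".\1", s)  # ns -> us
    for fmt in ("%Y-%m-%d %H:%M:%S.%f %z", "%Y-%m-%d %H:%M:%S %z"):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {s!r}")

# Timestamps taken from the kube-scheduler entry in the log above.
created = parse_k8s_time("2025-02-13 19:16:24 +0000 UTC")
observed = parse_k8s_time("2025-02-13 19:16:25.436582789 +0000 UTC")
startup_seconds = (observed - created).total_seconds()
```

The result is in the same ballpark as the logged SLO duration; the kubelet's own figure is computed internally from monotonic clocks, so it will not match a wall-clock subtraction exactly.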
Feb 13 19:16:39.047900 kubelet[2813]: I0213 19:16:39.047735 2813 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:16:39.050015 containerd[1499]: time="2025-02-13T19:16:39.049640487Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:16:39.051711 kubelet[2813]: I0213 19:16:39.051260 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:16:39.619872 kubelet[2813]: I0213 19:16:39.619809 2813 topology_manager.go:215] "Topology Admit Handler" podUID="fe6ff944-cc56-4dd1-95fc-ee49f1c1c852" podNamespace="kube-system" podName="kube-proxy-czb88" Feb 13 19:16:39.632442 kubelet[2813]: I0213 19:16:39.630152 2813 topology_manager.go:215] "Topology Admit Handler" podUID="5c29a3a3-9cac-4c34-9f19-f7025b80d978" podNamespace="kube-flannel" podName="kube-flannel-ds-4ftv5" Feb 13 19:16:39.634338 systemd[1]: Created slice kubepods-besteffort-podfe6ff944_cc56_4dd1_95fc_ee49f1c1c852.slice - libcontainer container kubepods-besteffort-podfe6ff944_cc56_4dd1_95fc_ee49f1c1c852.slice. Feb 13 19:16:39.652572 systemd[1]: Created slice kubepods-burstable-pod5c29a3a3_9cac_4c34_9f19_f7025b80d978.slice - libcontainer container kubepods-burstable-pod5c29a3a3_9cac_4c34_9f19_f7025b80d978.slice. 
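Worth noting in the "Created slice" lines above: the slice name is derived from the pod's QoS class and UID, with the dashes in the UID escaped to underscores for systemd. A small sketch of that mapping (illustrative, not the kubelet's actual implementation):

```python
def pod_slice_name(qos_class, pod_uid):
    """Build a kubepods systemd slice name for a pod cgroup, escaping
    the dashes in the UID as systemd requires (hypothetical helper;
    compare the 'Created slice' lines in the log above)."""
    escaped = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped}.slice"

# UID taken from the kube-proxy-czb88 Topology Admit Handler entry.
name = pod_slice_name("besteffort", "fe6ff944-cc56-4dd1-95fc-ee49f1c1c852")
```

This reproduces the `kubepods-besteffort-pod...slice` name systemd creates for kube-proxy-czb88 in the log, while the burstable flannel pod lands under a `kubepods-burstable-pod...slice` by the same rule.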
Feb 13 19:16:39.716328 kubelet[2813]: I0213 19:16:39.715613 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c29a3a3-9cac-4c34-9f19-f7025b80d978-run\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.716328 kubelet[2813]: I0213 19:16:39.715710 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5c29a3a3-9cac-4c34-9f19-f7025b80d978-cni-plugin\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.716328 kubelet[2813]: I0213 19:16:39.715801 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c29a3a3-9cac-4c34-9f19-f7025b80d978-xtables-lock\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.716328 kubelet[2813]: I0213 19:16:39.715850 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe6ff944-cc56-4dd1-95fc-ee49f1c1c852-xtables-lock\") pod \"kube-proxy-czb88\" (UID: \"fe6ff944-cc56-4dd1-95fc-ee49f1c1c852\") " pod="kube-system/kube-proxy-czb88" Feb 13 19:16:39.716328 kubelet[2813]: I0213 19:16:39.715898 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwmq\" (UniqueName: \"kubernetes.io/projected/5c29a3a3-9cac-4c34-9f19-f7025b80d978-kube-api-access-spwmq\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.717015 kubelet[2813]: I0213 19:16:39.715941 2813 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe6ff944-cc56-4dd1-95fc-ee49f1c1c852-kube-proxy\") pod \"kube-proxy-czb88\" (UID: \"fe6ff944-cc56-4dd1-95fc-ee49f1c1c852\") " pod="kube-system/kube-proxy-czb88" Feb 13 19:16:39.717015 kubelet[2813]: I0213 19:16:39.715981 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvpf\" (UniqueName: \"kubernetes.io/projected/fe6ff944-cc56-4dd1-95fc-ee49f1c1c852-kube-api-access-tqvpf\") pod \"kube-proxy-czb88\" (UID: \"fe6ff944-cc56-4dd1-95fc-ee49f1c1c852\") " pod="kube-system/kube-proxy-czb88" Feb 13 19:16:39.717015 kubelet[2813]: I0213 19:16:39.716020 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5c29a3a3-9cac-4c34-9f19-f7025b80d978-cni\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.717015 kubelet[2813]: I0213 19:16:39.716059 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5c29a3a3-9cac-4c34-9f19-f7025b80d978-flannel-cfg\") pod \"kube-flannel-ds-4ftv5\" (UID: \"5c29a3a3-9cac-4c34-9f19-f7025b80d978\") " pod="kube-flannel/kube-flannel-ds-4ftv5" Feb 13 19:16:39.717015 kubelet[2813]: I0213 19:16:39.716106 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe6ff944-cc56-4dd1-95fc-ee49f1c1c852-lib-modules\") pod \"kube-proxy-czb88\" (UID: \"fe6ff944-cc56-4dd1-95fc-ee49f1c1c852\") " pod="kube-system/kube-proxy-czb88" Feb 13 19:16:39.948006 containerd[1499]: time="2025-02-13T19:16:39.947883796Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-czb88,Uid:fe6ff944-cc56-4dd1-95fc-ee49f1c1c852,Namespace:kube-system,Attempt:0,}" Feb 13 19:16:39.959815 containerd[1499]: time="2025-02-13T19:16:39.959567054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4ftv5,Uid:5c29a3a3-9cac-4c34-9f19-f7025b80d978,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:16:39.994126 containerd[1499]: time="2025-02-13T19:16:39.993367974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:16:39.994126 containerd[1499]: time="2025-02-13T19:16:39.993448815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:16:39.994126 containerd[1499]: time="2025-02-13T19:16:39.993472456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:39.994126 containerd[1499]: time="2025-02-13T19:16:39.993575977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:40.019633 containerd[1499]: time="2025-02-13T19:16:40.018520507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:16:40.019633 containerd[1499]: time="2025-02-13T19:16:40.018601468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:16:40.019633 containerd[1499]: time="2025-02-13T19:16:40.018620108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:16:40.019633 containerd[1499]: time="2025-02-13T19:16:40.018779030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:16:40.019004 systemd[1]: Started cri-containerd-70bdf97c2e439225741af7abf16c02c7828be31a399458f6e3c55d066ef679ca.scope - libcontainer container 70bdf97c2e439225741af7abf16c02c7828be31a399458f6e3c55d066ef679ca.
Feb 13 19:16:40.056166 systemd[1]: Started cri-containerd-009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7.scope - libcontainer container 009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7.
Feb 13 19:16:40.119974 containerd[1499]: time="2025-02-13T19:16:40.119874879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czb88,Uid:fe6ff944-cc56-4dd1-95fc-ee49f1c1c852,Namespace:kube-system,Attempt:0,} returns sandbox id \"70bdf97c2e439225741af7abf16c02c7828be31a399458f6e3c55d066ef679ca\""
Feb 13 19:16:40.125680 containerd[1499]: time="2025-02-13T19:16:40.125627425Z" level=info msg="CreateContainer within sandbox \"70bdf97c2e439225741af7abf16c02c7828be31a399458f6e3c55d066ef679ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:16:40.148598 containerd[1499]: time="2025-02-13T19:16:40.148458849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4ftv5,Uid:5c29a3a3-9cac-4c34-9f19-f7025b80d978,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\""
Feb 13 19:16:40.154086 containerd[1499]: time="2025-02-13T19:16:40.153209264Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 19:16:40.166457 containerd[1499]: time="2025-02-13T19:16:40.166182414Z" level=info msg="CreateContainer within sandbox \"70bdf97c2e439225741af7abf16c02c7828be31a399458f6e3c55d066ef679ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f6948545fc44860983f84e159d87a117ab76ea290e0c7492a527a3f1acee1d1\""
Feb 13 19:16:40.169355 containerd[1499]: time="2025-02-13T19:16:40.167827713Z" level=info msg="StartContainer for \"2f6948545fc44860983f84e159d87a117ab76ea290e0c7492a527a3f1acee1d1\""
Feb 13 19:16:40.208819 systemd[1]: Started cri-containerd-2f6948545fc44860983f84e159d87a117ab76ea290e0c7492a527a3f1acee1d1.scope - libcontainer container 2f6948545fc44860983f84e159d87a117ab76ea290e0c7492a527a3f1acee1d1.
Feb 13 19:16:40.266336 containerd[1499]: time="2025-02-13T19:16:40.266274531Z" level=info msg="StartContainer for \"2f6948545fc44860983f84e159d87a117ab76ea290e0c7492a527a3f1acee1d1\" returns successfully"
Feb 13 19:16:40.428338 kubelet[2813]: I0213 19:16:40.428248 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-czb88" podStartSLOduration=1.428214963 podStartE2EDuration="1.428214963s" podCreationTimestamp="2025-02-13 19:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:16:40.428088081 +0000 UTC m=+16.228438254" watchObservedRunningTime="2025-02-13 19:16:40.428214963 +0000 UTC m=+16.228565336"
Feb 13 19:16:43.008003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908310149.mount: Deactivated successfully.
Feb 13 19:16:43.056174 containerd[1499]: time="2025-02-13T19:16:43.056038464Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:43.057967 containerd[1499]: time="2025-02-13T19:16:43.057891764Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Feb 13 19:16:43.059648 containerd[1499]: time="2025-02-13T19:16:43.059162777Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:43.062356 containerd[1499]: time="2025-02-13T19:16:43.062311811Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:43.063927 containerd[1499]: time="2025-02-13T19:16:43.063884668Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.910590484s"
Feb 13 19:16:43.064092 containerd[1499]: time="2025-02-13T19:16:43.064074670Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Feb 13 19:16:43.068834 containerd[1499]: time="2025-02-13T19:16:43.067413906Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 19:16:43.088167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917706550.mount: Deactivated successfully.
Feb 13 19:16:43.097236 containerd[1499]: time="2025-02-13T19:16:43.097151707Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a\""
Feb 13 19:16:43.098543 containerd[1499]: time="2025-02-13T19:16:43.098314479Z" level=info msg="StartContainer for \"3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a\""
Feb 13 19:16:43.128104 systemd[1]: Started cri-containerd-3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a.scope - libcontainer container 3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a.
Feb 13 19:16:43.163567 systemd[1]: cri-containerd-3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a.scope: Deactivated successfully.
Feb 13 19:16:43.166910 containerd[1499]: time="2025-02-13T19:16:43.166269172Z" level=info msg="StartContainer for \"3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a\" returns successfully"
Feb 13 19:16:43.208337 containerd[1499]: time="2025-02-13T19:16:43.208017743Z" level=info msg="shim disconnected" id=3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a namespace=k8s.io
Feb 13 19:16:43.208337 containerd[1499]: time="2025-02-13T19:16:43.208101983Z" level=warning msg="cleaning up after shim disconnected" id=3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a namespace=k8s.io
Feb 13 19:16:43.208337 containerd[1499]: time="2025-02-13T19:16:43.208118704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:16:43.221961 containerd[1499]: time="2025-02-13T19:16:43.221781811Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:16:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:16:43.427415 containerd[1499]: time="2025-02-13T19:16:43.426544459Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:16:43.934303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec3a488d13001777ed82fe93d124bd63822b232a446151b3ce0f869b0005c0a-rootfs.mount: Deactivated successfully.
Feb 13 19:16:46.442122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593005203.mount: Deactivated successfully.
Feb 13 19:16:47.212573 containerd[1499]: time="2025-02-13T19:16:47.212520911Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:47.216738 containerd[1499]: time="2025-02-13T19:16:47.216654792Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Feb 13 19:16:47.218615 containerd[1499]: time="2025-02-13T19:16:47.218575571Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:47.223212 containerd[1499]: time="2025-02-13T19:16:47.223091135Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:16:47.224802 containerd[1499]: time="2025-02-13T19:16:47.224311227Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.797711127s"
Feb 13 19:16:47.224802 containerd[1499]: time="2025-02-13T19:16:47.224361708Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 19:16:47.229353 containerd[1499]: time="2025-02-13T19:16:47.229278516Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:16:47.257557 containerd[1499]: time="2025-02-13T19:16:47.257489434Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948\""
Feb 13 19:16:47.258373 containerd[1499]: time="2025-02-13T19:16:47.258302802Z" level=info msg="StartContainer for \"a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948\""
Feb 13 19:16:47.294078 systemd[1]: Started cri-containerd-a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948.scope - libcontainer container a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948.
Feb 13 19:16:47.328054 containerd[1499]: time="2025-02-13T19:16:47.327978489Z" level=info msg="StartContainer for \"a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948\" returns successfully"
Feb 13 19:16:47.328696 systemd[1]: cri-containerd-a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948.scope: Deactivated successfully.
Feb 13 19:16:47.353623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948-rootfs.mount: Deactivated successfully.
Feb 13 19:16:47.424704 kubelet[2813]: I0213 19:16:47.424671 2813 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:16:47.429479 containerd[1499]: time="2025-02-13T19:16:47.428976445Z" level=info msg="shim disconnected" id=a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948 namespace=k8s.io
Feb 13 19:16:47.429479 containerd[1499]: time="2025-02-13T19:16:47.429041166Z" level=warning msg="cleaning up after shim disconnected" id=a288eb831b16f34a1ad93dcb1e847f81cf29a35aa1f737479cc2c977deb28948 namespace=k8s.io
Feb 13 19:16:47.429479 containerd[1499]: time="2025-02-13T19:16:47.429049766Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:16:47.473471 kubelet[2813]: I0213 19:16:47.470545 2813 topology_manager.go:215] "Topology Admit Handler" podUID="17339fd1-2684-48db-b315-70c311bd92fc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mzhwq"
Feb 13 19:16:47.476516 kubelet[2813]: I0213 19:16:47.476162 2813 topology_manager.go:215] "Topology Admit Handler" podUID="14e9f450-0099-4298-b30f-5b6e14f88403" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pmvdm"
Feb 13 19:16:47.489603 systemd[1]: Created slice kubepods-burstable-pod17339fd1_2684_48db_b315_70c311bd92fc.slice - libcontainer container kubepods-burstable-pod17339fd1_2684_48db_b315_70c311bd92fc.slice.
Feb 13 19:16:47.500064 systemd[1]: Created slice kubepods-burstable-pod14e9f450_0099_4298_b30f_5b6e14f88403.slice - libcontainer container kubepods-burstable-pod14e9f450_0099_4298_b30f_5b6e14f88403.slice.
Feb 13 19:16:47.572024 kubelet[2813]: I0213 19:16:47.571970 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14e9f450-0099-4298-b30f-5b6e14f88403-config-volume\") pod \"coredns-7db6d8ff4d-pmvdm\" (UID: \"14e9f450-0099-4298-b30f-5b6e14f88403\") " pod="kube-system/coredns-7db6d8ff4d-pmvdm"
Feb 13 19:16:47.572317 kubelet[2813]: I0213 19:16:47.572289 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17339fd1-2684-48db-b315-70c311bd92fc-config-volume\") pod \"coredns-7db6d8ff4d-mzhwq\" (UID: \"17339fd1-2684-48db-b315-70c311bd92fc\") " pod="kube-system/coredns-7db6d8ff4d-mzhwq"
Feb 13 19:16:47.572486 kubelet[2813]: I0213 19:16:47.572459 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9b6\" (UniqueName: \"kubernetes.io/projected/17339fd1-2684-48db-b315-70c311bd92fc-kube-api-access-nc9b6\") pod \"coredns-7db6d8ff4d-mzhwq\" (UID: \"17339fd1-2684-48db-b315-70c311bd92fc\") " pod="kube-system/coredns-7db6d8ff4d-mzhwq"
Feb 13 19:16:47.572628 kubelet[2813]: I0213 19:16:47.572605 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2wk\" (UniqueName: \"kubernetes.io/projected/14e9f450-0099-4298-b30f-5b6e14f88403-kube-api-access-5b2wk\") pod \"coredns-7db6d8ff4d-pmvdm\" (UID: \"14e9f450-0099-4298-b30f-5b6e14f88403\") " pod="kube-system/coredns-7db6d8ff4d-pmvdm"
Feb 13 19:16:47.797612 containerd[1499]: time="2025-02-13T19:16:47.797206756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzhwq,Uid:17339fd1-2684-48db-b315-70c311bd92fc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:16:47.809634 containerd[1499]: time="2025-02-13T19:16:47.809584758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmvdm,Uid:14e9f450-0099-4298-b30f-5b6e14f88403,Namespace:kube-system,Attempt:0,}"
Feb 13 19:16:47.840452 containerd[1499]: time="2025-02-13T19:16:47.840396782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzhwq,Uid:17339fd1-2684-48db-b315-70c311bd92fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:16:47.840733 kubelet[2813]: E0213 19:16:47.840695 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:16:47.840891 kubelet[2813]: E0213 19:16:47.840870 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-mzhwq"
Feb 13 19:16:47.840936 kubelet[2813]: E0213 19:16:47.840900 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-mzhwq"
Feb 13 19:16:47.840983 kubelet[2813]: E0213 19:16:47.840959 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mzhwq_kube-system(17339fd1-2684-48db-b315-70c311bd92fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mzhwq_kube-system(17339fd1-2684-48db-b315-70c311bd92fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-mzhwq" podUID="17339fd1-2684-48db-b315-70c311bd92fc"
Feb 13 19:16:47.852365 containerd[1499]: time="2025-02-13T19:16:47.852301739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmvdm,Uid:14e9f450-0099-4298-b30f-5b6e14f88403,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7769db24932b311acc163885ffb2e3e4273842757b27ce730eae9eafa06f10da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:16:47.852631 kubelet[2813]: E0213 19:16:47.852587 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7769db24932b311acc163885ffb2e3e4273842757b27ce730eae9eafa06f10da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:16:47.852693 kubelet[2813]: E0213 19:16:47.852655 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7769db24932b311acc163885ffb2e3e4273842757b27ce730eae9eafa06f10da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pmvdm"
Feb 13 19:16:47.852693 kubelet[2813]: E0213 19:16:47.852676 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7769db24932b311acc163885ffb2e3e4273842757b27ce730eae9eafa06f10da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pmvdm"
Feb 13 19:16:47.852803 kubelet[2813]: E0213 19:16:47.852712 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pmvdm_kube-system(14e9f450-0099-4298-b30f-5b6e14f88403)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pmvdm_kube-system(14e9f450-0099-4298-b30f-5b6e14f88403)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7769db24932b311acc163885ffb2e3e4273842757b27ce730eae9eafa06f10da\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-pmvdm" podUID="14e9f450-0099-4298-b30f-5b6e14f88403"
Feb 13 19:16:48.334172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dc7ae6099c163a4ed22bc1af7c85e5825d61c704b7a430309bf18c901f056c0-shm.mount: Deactivated successfully.
Feb 13 19:16:48.452679 containerd[1499]: time="2025-02-13T19:16:48.452630323Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 19:16:48.475131 containerd[1499]: time="2025-02-13T19:16:48.475081299Z" level=info msg="CreateContainer within sandbox \"009966426c83707653920d7143fc7ababd005beec667c2624d4bbec68d626cf7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9ba8238b135b4de292080d474b557cbcee131583417b9404c908ff6d4567ed01\""
Feb 13 19:16:48.476275 containerd[1499]: time="2025-02-13T19:16:48.476202190Z" level=info msg="StartContainer for \"9ba8238b135b4de292080d474b557cbcee131583417b9404c908ff6d4567ed01\""
Feb 13 19:16:48.514129 systemd[1]: Started cri-containerd-9ba8238b135b4de292080d474b557cbcee131583417b9404c908ff6d4567ed01.scope - libcontainer container 9ba8238b135b4de292080d474b557cbcee131583417b9404c908ff6d4567ed01.
Feb 13 19:16:48.548814 containerd[1499]: time="2025-02-13T19:16:48.548640929Z" level=info msg="StartContainer for \"9ba8238b135b4de292080d474b557cbcee131583417b9404c908ff6d4567ed01\" returns successfully"
Feb 13 19:16:49.634718 systemd-networkd[1398]: flannel.1: Link UP
Feb 13 19:16:49.634725 systemd-networkd[1398]: flannel.1: Gained carrier
Feb 13 19:16:51.378045 systemd-networkd[1398]: flannel.1: Gained IPv6LL
Feb 13 19:16:59.329830 containerd[1499]: time="2025-02-13T19:16:59.329723377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzhwq,Uid:17339fd1-2684-48db-b315-70c311bd92fc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:16:59.365661 systemd-networkd[1398]: cni0: Link UP
Feb 13 19:16:59.365670 systemd-networkd[1398]: cni0: Gained carrier
Feb 13 19:16:59.371059 systemd-networkd[1398]: veth27659388: Link UP
Feb 13 19:16:59.372158 systemd-networkd[1398]: cni0: Lost carrier
Feb 13 19:16:59.375026 kernel: cni0: port 1(veth27659388) entered blocking state
Feb 13 19:16:59.375150 kernel: cni0: port 1(veth27659388) entered disabled state
Feb 13 19:16:59.377952 kernel: veth27659388: entered allmulticast mode
Feb 13 19:16:59.378067 kernel: veth27659388: entered promiscuous mode
Feb 13 19:16:59.389826 kernel: cni0: port 1(veth27659388) entered blocking state
Feb 13 19:16:59.389951 kernel: cni0: port 1(veth27659388) entered forwarding state
Feb 13 19:16:59.392301 systemd-networkd[1398]: veth27659388: Gained carrier
Feb 13 19:16:59.394060 systemd-networkd[1398]: cni0: Gained carrier
Feb 13 19:16:59.398271 containerd[1499]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Feb 13 19:16:59.398271 containerd[1499]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:16:59.421927 containerd[1499]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:16:59.421500563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:16:59.421927 containerd[1499]: time="2025-02-13T19:16:59.421566084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:16:59.421927 containerd[1499]: time="2025-02-13T19:16:59.421581804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:16:59.421927 containerd[1499]: time="2025-02-13T19:16:59.421667484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:16:59.444028 systemd[1]: Started cri-containerd-8255a197629835959728774c96df2d6a230e4456e65633d8193dda063503f88d.scope - libcontainer container 8255a197629835959728774c96df2d6a230e4456e65633d8193dda063503f88d.
Feb 13 19:16:59.482151 containerd[1499]: time="2025-02-13T19:16:59.482042669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzhwq,Uid:17339fd1-2684-48db-b315-70c311bd92fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8255a197629835959728774c96df2d6a230e4456e65633d8193dda063503f88d\""
Feb 13 19:16:59.486909 containerd[1499]: time="2025-02-13T19:16:59.486732425Z" level=info msg="CreateContainer within sandbox \"8255a197629835959728774c96df2d6a230e4456e65633d8193dda063503f88d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:16:59.502720 containerd[1499]: time="2025-02-13T19:16:59.502416106Z" level=info msg="CreateContainer within sandbox \"8255a197629835959728774c96df2d6a230e4456e65633d8193dda063503f88d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f36c80e6d02f3654145f3347cad069505366eed74720fc576de686dfea280d51\""
Feb 13 19:16:59.505996 containerd[1499]: time="2025-02-13T19:16:59.503309993Z" level=info msg="StartContainer for \"f36c80e6d02f3654145f3347cad069505366eed74720fc576de686dfea280d51\""
Feb 13 19:16:59.536149 systemd[1]: Started cri-containerd-f36c80e6d02f3654145f3347cad069505366eed74720fc576de686dfea280d51.scope - libcontainer container f36c80e6d02f3654145f3347cad069505366eed74720fc576de686dfea280d51.
Feb 13 19:16:59.570105 containerd[1499]: time="2025-02-13T19:16:59.569945186Z" level=info msg="StartContainer for \"f36c80e6d02f3654145f3347cad069505366eed74720fc576de686dfea280d51\" returns successfully"
Feb 13 19:17:00.500191 kubelet[2813]: I0213 19:17:00.499251 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mzhwq" podStartSLOduration=20.499228428 podStartE2EDuration="20.499228428s" podCreationTimestamp="2025-02-13 19:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:00.497638256 +0000 UTC m=+36.297988469" watchObservedRunningTime="2025-02-13 19:17:00.499228428 +0000 UTC m=+36.299578641"
Feb 13 19:17:00.500191 kubelet[2813]: I0213 19:17:00.500010 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4ftv5" podStartSLOduration=14.423993538 podStartE2EDuration="21.499998833s" podCreationTimestamp="2025-02-13 19:16:39 +0000 UTC" firstStartedPulling="2025-02-13 19:16:40.151295762 +0000 UTC m=+15.951645975" lastFinishedPulling="2025-02-13 19:16:47.227301097 +0000 UTC m=+23.027651270" observedRunningTime="2025-02-13 19:16:49.469621317 +0000 UTC m=+25.269971530" watchObservedRunningTime="2025-02-13 19:17:00.499998833 +0000 UTC m=+36.300349006"
Feb 13 19:17:01.042133 systemd-networkd[1398]: veth27659388: Gained IPv6LL
Feb 13 19:17:01.298007 systemd-networkd[1398]: cni0: Gained IPv6LL
Feb 13 19:17:02.333029 containerd[1499]: time="2025-02-13T19:17:02.332389121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmvdm,Uid:14e9f450-0099-4298-b30f-5b6e14f88403,Namespace:kube-system,Attempt:0,}"
Feb 13 19:17:02.367819 kernel: cni0: port 2(veth490f08ac) entered blocking state
Feb 13 19:17:02.367943 kernel: cni0: port 2(veth490f08ac) entered disabled state
Feb 13 19:17:02.367979 systemd-networkd[1398]: veth490f08ac: Link UP
Feb 13 19:17:02.372860 kernel: veth490f08ac: entered allmulticast mode
Feb 13 19:17:02.373014 kernel: veth490f08ac: entered promiscuous mode
Feb 13 19:17:02.377279 kernel: cni0: port 2(veth490f08ac) entered blocking state
Feb 13 19:17:02.377485 kernel: cni0: port 2(veth490f08ac) entered forwarding state
Feb 13 19:17:02.377373 systemd-networkd[1398]: veth490f08ac: Gained carrier
Feb 13 19:17:02.383651 containerd[1499]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Feb 13 19:17:02.383651 containerd[1499]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:17:02.404313 containerd[1499]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:17:02.403884921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:17:02.404313 containerd[1499]: time="2025-02-13T19:17:02.403965881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:17:02.404313 containerd[1499]: time="2025-02-13T19:17:02.404060082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:17:02.404313 containerd[1499]: time="2025-02-13T19:17:02.404198643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:17:02.423590 systemd[1]: run-containerd-runc-k8s.io-437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25-runc.vu9c4P.mount: Deactivated successfully.
Feb 13 19:17:02.435029 systemd[1]: Started cri-containerd-437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25.scope - libcontainer container 437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25.
Feb 13 19:17:02.472563 containerd[1499]: time="2025-02-13T19:17:02.472509940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmvdm,Uid:14e9f450-0099-4298-b30f-5b6e14f88403,Namespace:kube-system,Attempt:0,} returns sandbox id \"437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25\""
Feb 13 19:17:02.478337 containerd[1499]: time="2025-02-13T19:17:02.478264782Z" level=info msg="CreateContainer within sandbox \"437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:17:02.505111 containerd[1499]: time="2025-02-13T19:17:02.505040097Z" level=info msg="CreateContainer within sandbox \"437cfc8ac423e398c389afa21d08cee0da278e931e351c794b2c70f7fec50d25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d33572a4f14d1fceee7275720c735214117ec10f6e9ea35c9c6b5292835f8ad\""
Feb 13 19:17:02.512830 containerd[1499]: time="2025-02-13T19:17:02.510792218Z" level=info msg="StartContainer for \"3d33572a4f14d1fceee7275720c735214117ec10f6e9ea35c9c6b5292835f8ad\""
Feb 13 19:17:02.546187 systemd[1]: Started cri-containerd-3d33572a4f14d1fceee7275720c735214117ec10f6e9ea35c9c6b5292835f8ad.scope - libcontainer container 3d33572a4f14d1fceee7275720c735214117ec10f6e9ea35c9c6b5292835f8ad.
Feb 13 19:17:02.578723 containerd[1499]: time="2025-02-13T19:17:02.578433471Z" level=info msg="StartContainer for \"3d33572a4f14d1fceee7275720c735214117ec10f6e9ea35c9c6b5292835f8ad\" returns successfully"
Feb 13 19:17:03.512093 kubelet[2813]: I0213 19:17:03.512013 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pmvdm" podStartSLOduration=23.511990355000002 podStartE2EDuration="23.511990355s" podCreationTimestamp="2025-02-13 19:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:03.510642745 +0000 UTC m=+39.310992918" watchObservedRunningTime="2025-02-13 19:17:03.511990355 +0000 UTC m=+39.312340528"
Feb 13 19:17:04.050046 systemd-networkd[1398]: veth490f08ac: Gained IPv6LL
Feb 13 19:19:03.426464 update_engine[1480]: I20250213 19:19:03.426365 1480 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 19:19:03.426464 update_engine[1480]: I20250213 19:19:03.426434 1480 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.426731 1480 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428427 1480 omaha_request_params.cc:62] Current group set to alpha
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428557 1480 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428568 1480 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428591 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428634 1480 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428699 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428712 1480 omaha_request_action.cc:272] Request:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]:
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.428721 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.430102 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:19:03.430944 update_engine[1480]: I20250213 19:19:03.430480 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:19:03.431306 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 19:19:03.432188 update_engine[1480]: E20250213 19:19:03.432057 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:19:03.432188 update_engine[1480]: I20250213 19:19:03.432153 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 19:19:13.354945 update_engine[1480]: I20250213 19:19:13.354660 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:19:13.355918 update_engine[1480]: I20250213 19:19:13.355021 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:19:13.355918 update_engine[1480]: I20250213 19:19:13.355357 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:19:13.355918 update_engine[1480]: E20250213 19:19:13.355894 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:19:13.356133 update_engine[1480]: I20250213 19:19:13.355959 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 19:19:23.349392 update_engine[1480]: I20250213 19:19:23.349058 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:19:23.350090 update_engine[1480]: I20250213 19:19:23.349454 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:19:23.350090 update_engine[1480]: I20250213 19:19:23.349750 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:19:23.350522 update_engine[1480]: E20250213 19:19:23.350404 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:19:23.350522 update_engine[1480]: I20250213 19:19:23.350486 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 19:19:33.356800 update_engine[1480]: I20250213 19:19:33.356628 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:19:33.357419 update_engine[1480]: I20250213 19:19:33.357126 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:19:33.357604 update_engine[1480]: I20250213 19:19:33.357499 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:19:33.358120 update_engine[1480]: E20250213 19:19:33.358039 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:19:33.358228 update_engine[1480]: I20250213 19:19:33.358157 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:19:33.358228 update_engine[1480]: I20250213 19:19:33.358181 1480 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:19:33.358331 update_engine[1480]: E20250213 19:19:33.358302 1480 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 19:19:33.358367 update_engine[1480]: I20250213 19:19:33.358341 1480 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 19:19:33.358367 update_engine[1480]: I20250213 19:19:33.358356 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:19:33.358429 update_engine[1480]: I20250213 19:19:33.358367 1480 update_attempter.cc:306] Processing Done.
Feb 13 19:19:33.358429 update_engine[1480]: E20250213 19:19:33.358393 1480 update_attempter.cc:619] Update failed.
Feb 13 19:19:33.358429 update_engine[1480]: I20250213 19:19:33.358405 1480 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 19:19:33.358429 update_engine[1480]: I20250213 19:19:33.358417 1480 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 19:19:33.358562 update_engine[1480]: I20250213 19:19:33.358428 1480 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 19:19:33.358562 update_engine[1480]: I20250213 19:19:33.358544 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:19:33.358641 update_engine[1480]: I20250213 19:19:33.358582 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:19:33.358641 update_engine[1480]: I20250213 19:19:33.358597 1480 omaha_request_action.cc:272] Request:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]:
Feb 13 19:19:33.358641 update_engine[1480]: I20250213 19:19:33.358608 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:19:33.359346 update_engine[1480]: I20250213 19:19:33.358919 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:19:33.359390 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 19:19:33.359781 update_engine[1480]: I20250213 19:19:33.359328 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:19:33.359834 update_engine[1480]: E20250213 19:19:33.359795 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359833 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359840 1480 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359846 1480 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359851 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359863 1480 update_attempter.cc:306] Processing Done.
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359870 1480 update_attempter.cc:310] Error event sent.
Feb 13 19:19:33.359906 update_engine[1480]: I20250213 19:19:33.359879 1480 update_check_scheduler.cc:74] Next update check in 48m33s
Feb 13 19:19:33.360491 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 19:21:09.872523 systemd[1]: Started sshd@5-142.132.176.244:22-139.178.68.195:45636.service - OpenSSH per-connection server daemon (139.178.68.195:45636).
Feb 13 19:21:10.854978 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 45636 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:10.857575 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:10.865071 systemd-logind[1478]: New session 6 of user core.
Feb 13 19:21:10.871117 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:21:11.623138 sshd[4780]: Connection closed by 139.178.68.195 port 45636
Feb 13 19:21:11.622289 sshd-session[4770]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:11.628148 systemd[1]: sshd@5-142.132.176.244:22-139.178.68.195:45636.service: Deactivated successfully.
Feb 13 19:21:11.631234 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:21:11.632541 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:21:11.634098 systemd-logind[1478]: Removed session 6.
Feb 13 19:21:16.803619 systemd[1]: Started sshd@6-142.132.176.244:22-139.178.68.195:56616.service - OpenSSH per-connection server daemon (139.178.68.195:56616).
Feb 13 19:21:17.785938 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 56616 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:17.787933 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:17.794574 systemd-logind[1478]: New session 7 of user core.
Feb 13 19:21:17.804015 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:21:18.538866 sshd[4831]: Connection closed by 139.178.68.195 port 56616
Feb 13 19:21:18.539975 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:18.544861 systemd[1]: sshd@6-142.132.176.244:22-139.178.68.195:56616.service: Deactivated successfully.
Feb 13 19:21:18.547516 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:21:18.551261 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:21:18.552706 systemd-logind[1478]: Removed session 7.
Feb 13 19:21:23.725911 systemd[1]: Started sshd@7-142.132.176.244:22-139.178.68.195:56626.service - OpenSSH per-connection server daemon (139.178.68.195:56626).
Feb 13 19:21:24.710663 sshd[4864]: Accepted publickey for core from 139.178.68.195 port 56626 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:24.714142 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:24.722282 systemd-logind[1478]: New session 8 of user core.
Feb 13 19:21:24.728204 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:21:25.475059 sshd[4868]: Connection closed by 139.178.68.195 port 56626
Feb 13 19:21:25.474749 sshd-session[4864]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:25.482363 systemd[1]: sshd@7-142.132.176.244:22-139.178.68.195:56626.service: Deactivated successfully.
Feb 13 19:21:25.486546 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:21:25.488850 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:21:25.490584 systemd-logind[1478]: Removed session 8.
Feb 13 19:21:25.663225 systemd[1]: Started sshd@8-142.132.176.244:22-139.178.68.195:56632.service - OpenSSH per-connection server daemon (139.178.68.195:56632).
Feb 13 19:21:26.654977 sshd[4888]: Accepted publickey for core from 139.178.68.195 port 56632 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:26.657503 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:26.665378 systemd-logind[1478]: New session 9 of user core.
Feb 13 19:21:26.672988 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:21:27.475395 sshd[4906]: Connection closed by 139.178.68.195 port 56632
Feb 13 19:21:27.476894 sshd-session[4888]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:27.481919 systemd[1]: sshd@8-142.132.176.244:22-139.178.68.195:56632.service: Deactivated successfully.
Feb 13 19:21:27.485427 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:21:27.487920 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:21:27.489753 systemd-logind[1478]: Removed session 9.
Feb 13 19:21:27.649332 systemd[1]: Started sshd@9-142.132.176.244:22-139.178.68.195:53320.service - OpenSSH per-connection server daemon (139.178.68.195:53320).
Feb 13 19:21:28.646180 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 53320 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:28.647859 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:28.656793 systemd-logind[1478]: New session 10 of user core.
Feb 13 19:21:28.662020 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:21:29.412310 sshd[4917]: Connection closed by 139.178.68.195 port 53320
Feb 13 19:21:29.415522 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:29.419409 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:21:29.421934 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:21:29.422591 systemd[1]: sshd@9-142.132.176.244:22-139.178.68.195:53320.service: Deactivated successfully.
Feb 13 19:21:29.426686 systemd-logind[1478]: Removed session 10.
Feb 13 19:21:34.596112 systemd[1]: Started sshd@10-142.132.176.244:22-139.178.68.195:53324.service - OpenSSH per-connection server daemon (139.178.68.195:53324).
Feb 13 19:21:35.583015 sshd[4949]: Accepted publickey for core from 139.178.68.195 port 53324 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:35.585321 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:35.592370 systemd-logind[1478]: New session 11 of user core.
Feb 13 19:21:35.598111 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:21:36.348392 sshd[4957]: Connection closed by 139.178.68.195 port 53324
Feb 13 19:21:36.349067 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:36.353653 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:21:36.354068 systemd[1]: sshd@10-142.132.176.244:22-139.178.68.195:53324.service: Deactivated successfully.
Feb 13 19:21:36.358264 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:21:36.361592 systemd-logind[1478]: Removed session 11.
Feb 13 19:21:36.524117 systemd[1]: Started sshd@11-142.132.176.244:22-139.178.68.195:53330.service - OpenSSH per-connection server daemon (139.178.68.195:53330).
Feb 13 19:21:37.513310 sshd[4984]: Accepted publickey for core from 139.178.68.195 port 53330 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:37.515684 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:37.521193 systemd-logind[1478]: New session 12 of user core.
Feb 13 19:21:37.526994 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:21:38.322018 sshd[4986]: Connection closed by 139.178.68.195 port 53330
Feb 13 19:21:38.321848 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:38.329507 systemd[1]: sshd@11-142.132.176.244:22-139.178.68.195:53330.service: Deactivated successfully.
Feb 13 19:21:38.333340 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:21:38.336315 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:21:38.338926 systemd-logind[1478]: Removed session 12.
Feb 13 19:21:38.495936 systemd[1]: Started sshd@12-142.132.176.244:22-139.178.68.195:59362.service - OpenSSH per-connection server daemon (139.178.68.195:59362).
Feb 13 19:21:39.501852 sshd[4996]: Accepted publickey for core from 139.178.68.195 port 59362 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:39.504549 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:39.510984 systemd-logind[1478]: New session 13 of user core.
Feb 13 19:21:39.522145 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:21:41.754816 sshd[4998]: Connection closed by 139.178.68.195 port 59362
Feb 13 19:21:41.755296 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:41.760626 systemd[1]: sshd@12-142.132.176.244:22-139.178.68.195:59362.service: Deactivated successfully.
Feb 13 19:21:41.763745 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:21:41.765435 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:21:41.768129 systemd-logind[1478]: Removed session 13.
Feb 13 19:21:41.937226 systemd[1]: Started sshd@13-142.132.176.244:22-139.178.68.195:59372.service - OpenSSH per-connection server daemon (139.178.68.195:59372).
Feb 13 19:21:42.938830 sshd[5039]: Accepted publickey for core from 139.178.68.195 port 59372 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:42.940883 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:42.948905 systemd-logind[1478]: New session 14 of user core.
Feb 13 19:21:42.952251 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:21:43.824831 sshd[5041]: Connection closed by 139.178.68.195 port 59372
Feb 13 19:21:43.825526 sshd-session[5039]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:43.830283 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:21:43.830621 systemd[1]: sshd@13-142.132.176.244:22-139.178.68.195:59372.service: Deactivated successfully.
Feb 13 19:21:43.833143 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:21:43.837632 systemd-logind[1478]: Removed session 14.
Feb 13 19:21:44.006272 systemd[1]: Started sshd@14-142.132.176.244:22-139.178.68.195:59374.service - OpenSSH per-connection server daemon (139.178.68.195:59374).
Feb 13 19:21:44.996688 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 59374 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:44.998833 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:45.006277 systemd-logind[1478]: New session 15 of user core.
Feb 13 19:21:45.020200 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:21:45.753420 sshd[5053]: Connection closed by 139.178.68.195 port 59374
Feb 13 19:21:45.754289 sshd-session[5051]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:45.759839 systemd[1]: sshd@14-142.132.176.244:22-139.178.68.195:59374.service: Deactivated successfully.
Feb 13 19:21:45.763211 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:21:45.764457 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:21:45.765474 systemd-logind[1478]: Removed session 15.
Feb 13 19:21:50.946383 systemd[1]: Started sshd@15-142.132.176.244:22-139.178.68.195:42730.service - OpenSSH per-connection server daemon (139.178.68.195:42730).
Feb 13 19:21:51.960605 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 42730 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:51.963368 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:51.971076 systemd-logind[1478]: New session 16 of user core.
Feb 13 19:21:51.978135 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:21:52.732873 sshd[5112]: Connection closed by 139.178.68.195 port 42730
Feb 13 19:21:52.733207 sshd-session[5095]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:52.737942 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:21:52.738053 systemd[1]: sshd@15-142.132.176.244:22-139.178.68.195:42730.service: Deactivated successfully.
Feb 13 19:21:52.740279 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:21:52.741939 systemd-logind[1478]: Removed session 16.
Feb 13 19:21:57.916724 systemd[1]: Started sshd@16-142.132.176.244:22-139.178.68.195:41658.service - OpenSSH per-connection server daemon (139.178.68.195:41658).
Feb 13 19:21:58.905581 sshd[5146]: Accepted publickey for core from 139.178.68.195 port 41658 ssh2: RSA SHA256:GiO19KKKK6dAgXb8V1C7vI95O6t/PswdbHT7p8WkVYc
Feb 13 19:21:58.907580 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:21:58.913997 systemd-logind[1478]: New session 17 of user core.
Feb 13 19:21:58.922157 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:21:59.660819 sshd[5148]: Connection closed by 139.178.68.195 port 41658
Feb 13 19:21:59.661850 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
Feb 13 19:21:59.668373 systemd[1]: sshd@16-142.132.176.244:22-139.178.68.195:41658.service: Deactivated successfully.
Feb 13 19:21:59.671431 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:21:59.672521 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:21:59.675475 systemd-logind[1478]: Removed session 17.