Jan 29 10:56:40.882155 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 10:56:40.885724 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025 Jan 29 10:56:40.885757 kernel: KASLR enabled Jan 29 10:56:40.885763 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 29 10:56:40.885769 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 29 10:56:40.885774 kernel: random: crng init done Jan 29 10:56:40.885781 kernel: secureboot: Secure boot disabled Jan 29 10:56:40.885787 kernel: ACPI: Early table checksum verification disabled Jan 29 10:56:40.885793 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 29 10:56:40.885801 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 29 10:56:40.885807 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885813 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885818 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885824 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885831 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885839 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885846 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885852 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885858 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 10:56:40.885864 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 10:56:40.885870 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 29 10:56:40.885876 kernel: NUMA: Failed to initialise from firmware Jan 29 10:56:40.885882 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 10:56:40.885888 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Jan 29 10:56:40.885894 kernel: Zone ranges: Jan 29 10:56:40.885901 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 10:56:40.885907 kernel: DMA32 empty Jan 29 10:56:40.885913 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 29 10:56:40.885919 kernel: Movable zone start for each node Jan 29 10:56:40.885925 kernel: Early memory node ranges Jan 29 10:56:40.885931 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 29 10:56:40.885937 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 29 10:56:40.885943 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 29 10:56:40.885949 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 29 10:56:40.885955 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 29 10:56:40.885961 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 29 10:56:40.885967 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 29 10:56:40.885975 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 29 10:56:40.885981 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jan 29 10:56:40.885987 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Jan 29 10:56:40.885996 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 29 10:56:40.886003 kernel: psci: probing for conduit method from ACPI. Jan 29 10:56:40.886009 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 10:56:40.886017 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 10:56:40.886024 kernel: psci: Trusted OS migration not required Jan 29 10:56:40.886030 kernel: psci: SMC Calling Convention v1.1 Jan 29 10:56:40.886036 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 10:56:40.886043 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 10:56:40.886049 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 10:56:40.886056 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 10:56:40.886062 kernel: Detected PIPT I-cache on CPU0 Jan 29 10:56:40.886069 kernel: CPU features: detected: GIC system register CPU interface Jan 29 10:56:40.886075 kernel: CPU features: detected: Hardware dirty bit management Jan 29 10:56:40.886083 kernel: CPU features: detected: Spectre-v4 Jan 29 10:56:40.886089 kernel: CPU features: detected: Spectre-BHB Jan 29 10:56:40.886096 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 10:56:40.886102 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 10:56:40.886109 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 10:56:40.886127 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 10:56:40.886134 kernel: alternatives: applying boot alternatives Jan 29 10:56:40.886141 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 10:56:40.886148 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 10:56:40.886155 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 10:56:40.886162 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 10:56:40.886170 kernel: Fallback order for Node 0: 0 Jan 29 10:56:40.886219 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 29 10:56:40.886227 kernel: Policy zone: Normal Jan 29 10:56:40.886234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 10:56:40.886240 kernel: software IO TLB: area num 2. Jan 29 10:56:40.886247 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 29 10:56:40.886253 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved) Jan 29 10:56:40.886260 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 10:56:40.886266 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 10:56:40.886274 kernel: rcu: RCU event tracing is enabled. Jan 29 10:56:40.886280 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 10:56:40.886287 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 10:56:40.886296 kernel: Tracing variant of Tasks RCU enabled. 
Jan 29 10:56:40.886302 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 10:56:40.886309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 10:56:40.886315 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 10:56:40.886321 kernel: GICv3: 256 SPIs implemented Jan 29 10:56:40.886328 kernel: GICv3: 0 Extended SPIs implemented Jan 29 10:56:40.886334 kernel: Root IRQ handler: gic_handle_irq Jan 29 10:56:40.886340 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 10:56:40.886347 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 10:56:40.886353 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 10:56:40.886360 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 10:56:40.886368 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 10:56:40.886375 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 29 10:56:40.886381 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 29 10:56:40.886388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 10:56:40.886394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 10:56:40.886401 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 10:56:40.886407 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 10:56:40.886414 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 10:56:40.886420 kernel: Console: colour dummy device 80x25 Jan 29 10:56:40.886427 kernel: ACPI: Core revision 20230628 Jan 29 10:56:40.886434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 10:56:40.886442 kernel: pid_max: default: 32768 minimum: 301 Jan 29 10:56:40.886449 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 10:56:40.886456 kernel: landlock: Up and running. Jan 29 10:56:40.886462 kernel: SELinux: Initializing. Jan 29 10:56:40.886469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:56:40.886476 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:56:40.886482 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:56:40.886489 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:56:40.886496 kernel: rcu: Hierarchical SRCU implementation. Jan 29 10:56:40.886504 kernel: rcu: Max phase no-delay instances is 400. Jan 29 10:56:40.886511 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 10:56:40.886518 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 10:56:40.886524 kernel: Remapping and enabling EFI services. Jan 29 10:56:40.886531 kernel: smp: Bringing up secondary CPUs ... 
Jan 29 10:56:40.886537 kernel: Detected PIPT I-cache on CPU1 Jan 29 10:56:40.886544 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 10:56:40.886551 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 29 10:56:40.886558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 10:56:40.886566 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 10:56:40.886573 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 10:56:40.886585 kernel: SMP: Total of 2 processors activated. Jan 29 10:56:40.886594 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 10:56:40.886601 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 10:56:40.886608 kernel: CPU features: detected: Common not Private translations Jan 29 10:56:40.886615 kernel: CPU features: detected: CRC32 instructions Jan 29 10:56:40.886622 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 10:56:40.886629 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 10:56:40.886638 kernel: CPU features: detected: LSE atomic instructions Jan 29 10:56:40.886645 kernel: CPU features: detected: Privileged Access Never Jan 29 10:56:40.886652 kernel: CPU features: detected: RAS Extension Support Jan 29 10:56:40.886659 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 10:56:40.886666 kernel: CPU: All CPU(s) started at EL1 Jan 29 10:56:40.886673 kernel: alternatives: applying system-wide alternatives Jan 29 10:56:40.886680 kernel: devtmpfs: initialized Jan 29 10:56:40.886687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 10:56:40.886696 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 10:56:40.886703 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 10:56:40.886711 kernel: SMBIOS 3.0.0 present. Jan 29 10:56:40.886718 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 29 10:56:40.886725 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 10:56:40.886732 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 10:56:40.886739 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 10:56:40.886746 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 10:56:40.886753 kernel: audit: initializing netlink subsys (disabled) Jan 29 10:56:40.886762 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Jan 29 10:56:40.886769 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 10:56:40.886777 kernel: cpuidle: using governor menu Jan 29 10:56:40.886784 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 29 10:56:40.886791 kernel: ASID allocator initialised with 32768 entries Jan 29 10:56:40.886798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 10:56:40.886805 kernel: Serial: AMBA PL011 UART driver Jan 29 10:56:40.886812 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 10:56:40.886819 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 10:56:40.886827 kernel: Modules: 508880 pages in range for PLT usage Jan 29 10:56:40.886834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 10:56:40.886841 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 10:56:40.886848 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 10:56:40.886855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 10:56:40.886862 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 10:56:40.886870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 10:56:40.886877 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 10:56:40.886884 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 10:56:40.886892 kernel: ACPI: Added _OSI(Module Device) Jan 29 10:56:40.886899 kernel: ACPI: Added _OSI(Processor Device) Jan 29 10:56:40.886906 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 10:56:40.886913 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 10:56:40.886920 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 10:56:40.886927 kernel: ACPI: Interpreter enabled Jan 29 10:56:40.886934 kernel: ACPI: Using GIC for interrupt routing Jan 29 10:56:40.886941 kernel: ACPI: MCFG table detected, 1 entries Jan 29 10:56:40.886948 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 10:56:40.886956 kernel: printk: console [ttyAMA0] enabled Jan 29 10:56:40.886964 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 10:56:40.889210 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 10:56:40.889369 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 10:56:40.889438 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 10:56:40.889502 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 10:56:40.889565 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 10:56:40.889580 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 29 10:56:40.889588 kernel: PCI host bridge to bus 0000:00 Jan 29 10:56:40.889662 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 10:56:40.889721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 10:56:40.889778 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 10:56:40.889835 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 10:56:40.889915 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 10:56:40.890002 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 29 10:56:40.890069 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 29 10:56:40.890368 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 10:56:40.890474 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.890544 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 29 10:56:40.890616 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.890689 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 29 10:56:40.890761 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.890826 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 29 10:56:40.890899 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.890965 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 29 10:56:40.891036 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.891103 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 29 10:56:40.891231 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.891303 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 29 10:56:40.891375 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.891440 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 29 10:56:40.891518 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.891588 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 29 10:56:40.891658 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 29 10:56:40.891723 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 29 10:56:40.891796 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 29 10:56:40.891861 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jan 29 10:56:40.891936 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 10:56:40.892003 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 29 10:56:40.892073 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 10:56:40.892157 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 10:56:40.893913 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 29 10:56:40.893997 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 29 10:56:40.894075 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 29 10:56:40.894161 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 29 10:56:40.894376 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 29 10:56:40.894458 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 29 10:56:40.894524 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 29 10:56:40.894598 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 29 10:56:40.894663 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jan 29 10:56:40.894728 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 29 10:56:40.894806 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 29 10:56:40.894878 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 29 10:56:40.894943 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 10:56:40.895016 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 10:56:40.895082 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 29 10:56:40.895161 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 29 10:56:40.895252 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 10:56:40.895342 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 29 10:56:40.895409 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 29 10:56:40.895472 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 29 10:56:40.895540 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 29 10:56:40.895605 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 29 10:56:40.895671 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 29 10:56:40.895745 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 29 10:56:40.895812 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 29 10:56:40.895877 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 29 10:56:40.895944 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 29 10:56:40.896007 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 29 10:56:40.896070 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 29 10:56:40.897132 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 29 10:56:40.900277 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 29 10:56:40.900378 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 29 10:56:40.900460 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 10:56:40.900526 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 29 10:56:40.900593 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 29 10:56:40.900661 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 10:56:40.900725 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 29 10:56:40.900788 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 29 10:56:40.900857 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 10:56:40.900929 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 29 10:56:40.900993 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 29 10:56:40.901062 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 10:56:40.901162 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 29 10:56:40.901254 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 29 10:56:40.901327 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Jan 29 10:56:40.901394 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 10:56:40.901470 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 29 10:56:40.901536 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 10:56:40.901606 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 29 10:56:40.901672 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 10:56:40.901744 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 29 10:56:40.901808 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 10:56:40.901875 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 29 10:56:40.901944 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 10:56:40.902013 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 29 10:56:40.902079 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 10:56:40.902164 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 29 10:56:40.904683 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 10:56:40.904772 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 29 10:56:40.904840 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 10:56:40.904922 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 29 10:56:40.904988 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 10:56:40.905060 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 29 10:56:40.905394 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 29 10:56:40.905530 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 29 10:56:40.905619 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 29 10:56:40.905710 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 29 10:56:40.905823 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 29 10:56:40.906005 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 29 10:56:40.906126 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 29 10:56:40.906245 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 29 10:56:40.906334 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 29 10:56:40.906424 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 29 10:56:40.906508 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 29 10:56:40.906596 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 29 10:56:40.906687 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 29 10:56:40.906774 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 29 10:56:40.906859 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 29 10:56:40.906946 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 29 10:56:40.907030 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 29 10:56:40.907135 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 29 10:56:40.907279 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Jan 29 10:56:40.907372 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 29 10:56:40.907475 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 29 10:56:40.907567 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 10:56:40.907654 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 29 10:56:40.907755 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 29 10:56:40.907842 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 29 10:56:40.907929 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 29 10:56:40.908013 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 10:56:40.908107 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 29 10:56:40.908283 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 29 10:56:40.908373 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 29 10:56:40.908458 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 29 10:56:40.908542 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 10:56:40.908641 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 10:56:40.908731 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 29 10:56:40.908818 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 29 10:56:40.908902 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 29 10:56:40.908986 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 29 10:56:40.909070 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 10:56:40.909265 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 10:56:40.909367 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 29 10:56:40.909463 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 29 10:56:40.909560 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 29 10:56:40.909627 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 10:56:40.909700 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 29 10:56:40.909768 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jan 29 10:56:40.909834 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 29 10:56:40.909897 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 29 10:56:40.909961 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 29 10:56:40.910040 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 10:56:40.910126 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 29 10:56:40.910223 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 29 10:56:40.910292 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 29 10:56:40.910358 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 29 10:56:40.910422 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 29 10:56:40.910486 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 10:56:40.910558 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 29 10:56:40.910629 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 29 10:56:40.910696 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 29 10:56:40.910761 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 29 10:56:40.910825 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 29 10:56:40.910889 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 29 10:56:40.910954 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 10:56:40.911022 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 29 10:56:40.911099 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 29 10:56:40.911238 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 29 10:56:40.911312 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 10:56:40.911380 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 29 10:56:40.911442 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 29 10:56:40.911504 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 29 10:56:40.911566 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 10:56:40.911631 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 10:56:40.911688 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 10:56:40.911749 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 10:56:40.911819 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 29 10:56:40.911878 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 29 10:56:40.911935 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 10:56:40.912001 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 29 10:56:40.912059 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 29 10:56:40.912154 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 10:56:40.912313 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 29 10:56:40.912377 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 29 10:56:40.912435 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 10:56:40.912503 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 10:56:40.912561 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 29 10:56:40.912619 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 10:56:40.912688 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 29 10:56:40.912747 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 29 10:56:40.912808 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 10:56:40.912874 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 29 10:56:40.912935 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 29 10:56:40.913005 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 10:56:40.913080 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 29 10:56:40.913174 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 29 10:56:40.913289 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 10:56:40.913364 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 29 10:56:40.913424 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 29 10:56:40.913486 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 10:56:40.913551 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 29 10:56:40.913611 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 29 10:56:40.913669 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 10:56:40.913678 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 10:56:40.913686 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 10:56:40.913694 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 10:56:40.913704 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 10:56:40.913712 kernel: iommu: Default domain type: Translated Jan 29 10:56:40.913719 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 10:56:40.913727 kernel: efivars: Registered efivars operations Jan 29 10:56:40.913734 kernel: vgaarb: loaded Jan 29 10:56:40.913741 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 10:56:40.913748 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 10:56:40.913756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 10:56:40.913763 kernel: pnp: PnP ACPI init Jan 29 10:56:40.913836 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 10:56:40.913847 kernel: pnp: PnP ACPI: found 1 devices Jan 29 10:56:40.913855 kernel: NET: Registered PF_INET protocol family Jan 29 10:56:40.913863 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 10:56:40.913870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 10:56:40.913878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 10:56:40.913885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 10:56:40.913893 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 10:56:40.913900 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 10:56:40.913910 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:56:40.913917 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:56:40.913925 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 10:56:40.913998 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 29 10:56:40.914009 kernel: PCI: CLS 0 bytes, default 64 Jan 29 10:56:40.914017 kernel: kvm [1]: HYP mode not available Jan 29 10:56:40.914024 kernel: Initialise system trusted keyrings Jan 29 10:56:40.914032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 10:56:40.914039 kernel: Key type asymmetric registered Jan 29 10:56:40.914048 kernel: Asymmetric key parser 'x509' registered Jan 29 10:56:40.914056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 10:56:40.914063 kernel: io scheduler mq-deadline registered Jan 29 10:56:40.914071 kernel: io scheduler kyber registered Jan 29 10:56:40.914078 kernel: io scheduler bfq registered Jan 29 10:56:40.914086 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 10:56:40.914167 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 29 10:56:40.914298 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 29 10:56:40.914368 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.914435 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 29 10:56:40.914499 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Jan 29 10:56:40.914562 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.914628 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 10:56:40.914692 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 10:56:40.914758 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.914824 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 10:56:40.914888 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 10:56:40.914950 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.915017 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 10:56:40.915082 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 10:56:40.915232 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.915308 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 10:56:40.915372 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 10:56:40.915434 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.915499 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 10:56:40.915562 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 10:56:40.915629 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.915694 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 10:56:40.915760 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 10:56:40.915822 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.915832 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 10:56:40.915897 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 10:56:40.915963 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 10:56:40.916026 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 10:56:40.916036 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 10:56:40.916044 kernel: ACPI: button: Power Button [PWRB] Jan 29 10:56:40.916052 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 10:56:40.916156 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 10:56:40.918348 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 10:56:40.918383 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 10:56:40.918399 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 10:56:40.918476 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 10:56:40.918487 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 10:56:40.918495 kernel: thunder_xcv, ver 1.0 Jan 29 10:56:40.918502 kernel: thunder_bgx, ver 1.0 Jan 29 10:56:40.918510 kernel: nicpf, ver 1.0 Jan 29 10:56:40.918517 kernel: nicvf, ver 
1.0 Jan 29 10:56:40.918594 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 10:56:40.918661 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:56:40 UTC (1738148200) Jan 29 10:56:40.918671 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 10:56:40.918679 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 10:56:40.918687 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 10:56:40.918694 kernel: watchdog: Hard watchdog permanently disabled Jan 29 10:56:40.918701 kernel: NET: Registered PF_INET6 protocol family Jan 29 10:56:40.918709 kernel: Segment Routing with IPv6 Jan 29 10:56:40.918716 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 10:56:40.918724 kernel: NET: Registered PF_PACKET protocol family Jan 29 10:56:40.918733 kernel: Key type dns_resolver registered Jan 29 10:56:40.918740 kernel: registered taskstats version 1 Jan 29 10:56:40.918748 kernel: Loading compiled-in X.509 certificates Jan 29 10:56:40.918755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 29 10:56:40.918763 kernel: Key type .fscrypt registered Jan 29 10:56:40.918770 kernel: Key type fscrypt-provisioning registered Jan 29 10:56:40.918777 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 10:56:40.918785 kernel: ima: Allocated hash algorithm: sha1 Jan 29 10:56:40.918792 kernel: ima: No architecture policies found Jan 29 10:56:40.918802 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 10:56:40.918811 kernel: clk: Disabling unused clocks Jan 29 10:56:40.918819 kernel: Freeing unused kernel memory: 39936K Jan 29 10:56:40.918826 kernel: Run /init as init process Jan 29 10:56:40.918833 kernel: with arguments: Jan 29 10:56:40.918840 kernel: /init Jan 29 10:56:40.918847 kernel: with environment: Jan 29 10:56:40.918854 kernel: HOME=/ Jan 29 10:56:40.918862 kernel: TERM=linux Jan 29 10:56:40.918870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 10:56:40.918879 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:56:40.918889 systemd[1]: Detected virtualization kvm. Jan 29 10:56:40.918897 systemd[1]: Detected architecture arm64. Jan 29 10:56:40.918905 systemd[1]: Running in initrd. Jan 29 10:56:40.918913 systemd[1]: No hostname configured, using default hostname. Jan 29 10:56:40.918920 systemd[1]: Hostname set to . Jan 29 10:56:40.918930 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:56:40.918937 systemd[1]: Queued start job for default target initrd.target. Jan 29 10:56:40.918945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:56:40.918954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:56:40.918962 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 10:56:40.918970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:56:40.918978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 29 10:56:40.918986 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 10:56:40.918997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 10:56:40.919005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 10:56:40.919013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:56:40.919021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:56:40.919029 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:56:40.919036 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:56:40.919044 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:56:40.919053 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:56:40.919061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:56:40.919069 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:56:40.919078 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 10:56:40.919086 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 10:56:40.919094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:56:40.919102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:56:40.919123 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:56:40.919134 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:56:40.919144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 10:56:40.919152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:56:40.919160 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 10:56:40.919168 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 10:56:40.919257 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:56:40.919277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:56:40.919286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:56:40.919294 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 10:56:40.919305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:56:40.919314 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 10:56:40.919351 systemd-journald[236]: Collecting audit messages is disabled. Jan 29 10:56:40.919374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:56:40.919382 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 10:56:40.919390 kernel: Bridge firewalling registered Jan 29 10:56:40.919398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:56:40.919406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:40.919414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:56:40.919425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 10:56:40.919433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:56:40.919441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:56:40.919450 systemd-journald[236]: Journal started Jan 29 10:56:40.919474 systemd-journald[236]: Runtime Journal (/run/log/journal/131e25c68977439490f4aca5ff658c65) is 8.0M, max 76.6M, 68.6M free. Jan 29 10:56:40.877861 systemd-modules-load[237]: Inserted module 'overlay' Jan 29 10:56:40.895663 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 29 10:56:40.923901 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:56:40.937022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:56:40.942728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:56:40.945381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:56:40.947146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:56:40.953444 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 10:56:40.954250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:56:40.965485 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:56:40.984279 dracut-cmdline[274]: dracut-dracut-053 Jan 29 10:56:40.985324 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 10:56:41.003754 systemd-resolved[276]: Positive Trust Anchors: Jan 29 10:56:41.005097 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:56:41.005871 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:56:41.013668 systemd-resolved[276]: Defaulting to hostname 'linux'. Jan 29 10:56:41.015667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:56:41.016832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:56:41.064210 kernel: SCSI subsystem initialized Jan 29 10:56:41.069250 kernel: Loading iSCSI transport class v2.0-870. Jan 29 10:56:41.077210 kernel: iscsi: registered transport (tcp) Jan 29 10:56:41.094216 kernel: iscsi: registered transport (qla4xxx) Jan 29 10:56:41.094282 kernel: QLogic iSCSI HBA Driver Jan 29 10:56:41.143652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 10:56:41.150455 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 29 10:56:41.170443 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 10:56:41.170515 kernel: device-mapper: uevent: version 1.0.3 Jan 29 10:56:41.171195 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 10:56:41.226231 kernel: raid6: neonx8 gen() 15670 MB/s Jan 29 10:56:41.243223 kernel: raid6: neonx4 gen() 15761 MB/s Jan 29 10:56:41.260230 kernel: raid6: neonx2 gen() 13152 MB/s Jan 29 10:56:41.277214 kernel: raid6: neonx1 gen() 10466 MB/s Jan 29 10:56:41.294213 kernel: raid6: int64x8 gen() 6761 MB/s Jan 29 10:56:41.311215 kernel: raid6: int64x4 gen() 7330 MB/s Jan 29 10:56:41.328230 kernel: raid6: int64x2 gen() 6087 MB/s Jan 29 10:56:41.345234 kernel: raid6: int64x1 gen() 5039 MB/s Jan 29 10:56:41.345280 kernel: raid6: using algorithm neonx4 gen() 15761 MB/s Jan 29 10:56:41.362262 kernel: raid6: .... xor() 12368 MB/s, rmw enabled Jan 29 10:56:41.362363 kernel: raid6: using neon recovery algorithm Jan 29 10:56:41.367221 kernel: xor: measuring software checksum speed Jan 29 10:56:41.367282 kernel: 8regs : 21584 MB/sec Jan 29 10:56:41.367310 kernel: 32regs : 21727 MB/sec Jan 29 10:56:41.367350 kernel: arm64_neon : 21271 MB/sec Jan 29 10:56:41.368210 kernel: xor: using function: 32regs (21727 MB/sec) Jan 29 10:56:41.417223 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 10:56:41.431861 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:56:41.439471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:56:41.454921 systemd-udevd[458]: Using default interface naming scheme 'v255'. Jan 29 10:56:41.458160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:56:41.467399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 10:56:41.484251 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 29 10:56:41.517490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:56:41.526467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:56:41.574834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:56:41.581640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 10:56:41.597912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 10:56:41.598610 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:56:41.599879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:56:41.600923 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:56:41.609244 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 10:56:41.621855 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:56:41.669212 kernel: scsi host0: Virtio SCSI HBA Jan 29 10:56:41.684296 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 10:56:41.684512 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 10:56:41.699150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:56:41.699307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 10:56:41.700875 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:56:41.701473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:56:41.701676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:41.702322 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:56:41.711405 kernel: ACPI: bus type USB registered Jan 29 10:56:41.711457 kernel: usbcore: registered new interface driver usbfs Jan 29 10:56:41.711468 kernel: usbcore: registered new interface driver hub Jan 29 10:56:41.711477 kernel: usbcore: registered new device driver usb Jan 29 10:56:41.712440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:56:41.731427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:41.735435 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 10:56:41.738874 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 10:56:41.739003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 10:56:41.739014 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 10:56:41.740668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:56:41.744910 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 10:56:41.749754 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 10:56:41.749868 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 10:56:41.749947 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 10:56:41.750023 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 10:56:41.750102 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 10:56:41.750228 kernel: hub 1-0:1.0: USB hub found Jan 29 10:56:41.750332 kernel: hub 1-0:1.0: 4 ports detected Jan 29 10:56:41.750413 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 10:56:41.750502 kernel: hub 2-0:1.0: USB hub found Jan 29 10:56:41.750585 kernel: hub 2-0:1.0: 4 ports detected Jan 29 10:56:41.761385 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 10:56:41.768448 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 10:56:41.768565 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 10:56:41.768650 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 10:56:41.768731 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 10:56:41.768811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 10:56:41.768821 kernel: GPT:17805311 != 80003071 Jan 29 10:56:41.768830 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 10:56:41.768839 kernel: GPT:17805311 != 80003071 Jan 29 10:56:41.768847 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 10:56:41.768856 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 10:56:41.768865 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 10:56:41.769570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 10:56:41.808358 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (510) Jan 29 10:56:41.811211 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (509) Jan 29 10:56:41.821763 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 10:56:41.828803 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 10:56:41.834944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 10:56:41.839062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 10:56:41.840480 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 10:56:41.852382 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 10:56:41.862324 disk-uuid[579]: Primary Header is updated. Jan 29 10:56:41.862324 disk-uuid[579]: Secondary Entries is updated. Jan 29 10:56:41.862324 disk-uuid[579]: Secondary Header is updated. Jan 29 10:56:41.868672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 10:56:41.989266 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 10:56:42.232314 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 10:56:42.369236 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 10:56:42.369386 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 10:56:42.369709 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 10:56:42.425245 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 10:56:42.425713 kernel: usbcore: registered new interface driver usbhid Jan 29 10:56:42.427973 kernel: usbhid: USB HID core driver Jan 29 10:56:42.884239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 10:56:42.885358 disk-uuid[581]: The operation has completed successfully. Jan 29 10:56:42.946545 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 10:56:42.946678 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 10:56:42.960385 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 10:56:42.973921 sh[595]: Success Jan 29 10:56:42.990262 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 10:56:43.041625 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 10:56:43.057672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 10:56:43.058395 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 10:56:43.079229 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 29 10:56:43.079294 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:56:43.079308 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 10:56:43.080193 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 10:56:43.080221 kernel: BTRFS info (device dm-0): using free space tree Jan 29 10:56:43.087207 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 10:56:43.090200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 10:56:43.090904 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 10:56:43.097403 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 10:56:43.100498 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 10:56:43.112565 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:56:43.112618 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:56:43.112640 kernel: BTRFS info (device sda6): using free space tree Jan 29 10:56:43.116248 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 10:56:43.116314 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 10:56:43.128201 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:56:43.128449 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 10:56:43.134984 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 10:56:43.142420 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 10:56:43.225571 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:56:43.236383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:56:43.241630 ignition[687]: Ignition 2.20.0 Jan 29 10:56:43.242244 ignition[687]: Stage: fetch-offline Jan 29 10:56:43.242633 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:43.242643 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:43.242807 ignition[687]: parsed url from cmdline: "" Jan 29 10:56:43.242810 ignition[687]: no config URL provided Jan 29 10:56:43.242815 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:56:43.242823 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:56:43.242829 ignition[687]: failed to fetch config: resource requires networking Jan 29 10:56:43.243039 ignition[687]: Ignition finished successfully Jan 29 10:56:43.246700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:56:43.258998 systemd-networkd[781]: lo: Link UP Jan 29 10:56:43.259008 systemd-networkd[781]: lo: Gained carrier Jan 29 10:56:43.260712 systemd-networkd[781]: Enumeration completed Jan 29 10:56:43.260909 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:56:43.261664 systemd[1]: Reached target network.target - Network. Jan 29 10:56:43.262538 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 10:56:43.262542 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:56:43.265021 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:43.265024 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:56:43.267695 systemd-networkd[781]: eth0: Link UP Jan 29 10:56:43.267698 systemd-networkd[781]: eth0: Gained carrier Jan 29 10:56:43.267708 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:43.268428 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 10:56:43.273549 systemd-networkd[781]: eth1: Link UP Jan 29 10:56:43.273556 systemd-networkd[781]: eth1: Gained carrier Jan 29 10:56:43.273568 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:43.283229 ignition[785]: Ignition 2.20.0 Jan 29 10:56:43.283242 ignition[785]: Stage: fetch Jan 29 10:56:43.283439 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:43.283450 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:43.283541 ignition[785]: parsed url from cmdline: "" Jan 29 10:56:43.283544 ignition[785]: no config URL provided Jan 29 10:56:43.283549 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:56:43.283556 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:56:43.283642 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 10:56:43.284524 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 10:56:43.301336 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 10:56:43.332308 systemd-networkd[781]: eth0: DHCPv4 address 188.34.178.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 10:56:43.485163 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 10:56:43.489582 ignition[785]: GET result: OK Jan 29 10:56:43.489669 ignition[785]: parsing config with SHA512: db5e47b831d3e3ca5fdf89055496c1b8f6a452110d234e7158f62dd589bbed1a973c86b7d55a4a6f9348d01a2d3e6cf4f19f1c1dacdae1f7b8dcd25e8dca0a0b Jan 29 10:56:43.494982 unknown[785]: fetched base config from "system" Jan 29 10:56:43.494993 unknown[785]: fetched base config from "system" Jan 29 10:56:43.495380 ignition[785]: fetch: fetch complete Jan 29 10:56:43.495001 unknown[785]: fetched user config from "hetzner" Jan 29 10:56:43.495385 ignition[785]: fetch: fetch passed Jan 29 10:56:43.496937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 10:56:43.495433 ignition[785]: Ignition finished successfully Jan 29 10:56:43.503420 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 10:56:43.515437 ignition[793]: Ignition 2.20.0 Jan 29 10:56:43.515448 ignition[793]: Stage: kargs Jan 29 10:56:43.515622 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:43.515633 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:43.518239 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 10:56:43.516520 ignition[793]: kargs: kargs passed Jan 29 10:56:43.516568 ignition[793]: Ignition finished successfully Jan 29 10:56:43.524453 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 10:56:43.534806 ignition[799]: Ignition 2.20.0 Jan 29 10:56:43.534818 ignition[799]: Stage: disks Jan 29 10:56:43.534992 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:43.535001 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:43.535896 ignition[799]: disks: disks passed Jan 29 10:56:43.537626 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 10:56:43.535941 ignition[799]: Ignition finished successfully Jan 29 10:56:43.538666 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 10:56:43.539219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 10:56:43.540510 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:56:43.541304 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:56:43.542275 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:56:43.547411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 10:56:43.564851 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 10:56:43.568687 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 10:56:43.575364 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 10:56:43.630240 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none. Jan 29 10:56:43.631412 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 10:56:43.633236 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 10:56:43.640318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:56:43.644407 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 10:56:43.648485 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 10:56:43.649488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 10:56:43.649576 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:56:43.658226 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (816) Jan 29 10:56:43.660359 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:56:43.660409 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:56:43.660422 kernel: BTRFS info (device sda6): using free space tree Jan 29 10:56:43.665291 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 10:56:43.665345 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 10:56:43.665977 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 10:56:43.673944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 10:56:43.677878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:56:43.716901 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 10:56:43.720723 coreos-metadata[818]: Jan 29 10:56:43.720 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 10:56:43.721991 coreos-metadata[818]: Jan 29 10:56:43.721 INFO Fetch successful Jan 29 10:56:43.722676 coreos-metadata[818]: Jan 29 10:56:43.722 INFO wrote hostname ci-4186-1-0-3-8e4516c670 to /sysroot/etc/hostname Jan 29 10:56:43.724091 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 29 10:56:43.726977 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 10:56:43.730248 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 10:56:43.734014 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 10:56:43.833659 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 10:56:43.843365 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 10:56:43.847566 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 10:56:43.857220 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:56:43.875226 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 10:56:43.882561 ignition[934]: INFO : Ignition 2.20.0 Jan 29 10:56:43.882561 ignition[934]: INFO : Stage: mount Jan 29 10:56:43.883715 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:43.883715 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:43.883715 ignition[934]: INFO : mount: mount passed Jan 29 10:56:43.885243 ignition[934]: INFO : Ignition finished successfully Jan 29 10:56:43.885227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 10:56:43.890322 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 10:56:44.078357 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 10:56:44.086443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:56:44.094748 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944) Jan 29 10:56:44.094814 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:56:44.094829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:56:44.096198 kernel: BTRFS info (device sda6): using free space tree Jan 29 10:56:44.098233 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 10:56:44.098308 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 10:56:44.101590 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:56:44.119915 ignition[961]: INFO : Ignition 2.20.0 Jan 29 10:56:44.119915 ignition[961]: INFO : Stage: files Jan 29 10:56:44.120954 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:44.120954 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:44.123866 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 29 10:56:44.123866 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 10:56:44.123866 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 10:56:44.128246 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 10:56:44.128246 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 10:56:44.128246 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 10:56:44.126814 unknown[961]: wrote ssh authorized keys file for user: core Jan 29 10:56:44.132694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 10:56:44.132694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 10:56:44.785353 systemd-networkd[781]: eth1: Gained IPv6LL Jan 29 10:56:44.977506 systemd-networkd[781]: eth0: Gained IPv6LL Jan 29 10:56:45.910205 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 10:56:51.715622 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 10:56:51.715622 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:56:51.718372 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 10:56:52.099798 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 10:56:53.415404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:56:53.415404 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 10:56:53.420340 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:56:53.420340 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:56:53.420340 ignition[961]: INFO : files: files passed Jan 29 10:56:53.420340 ignition[961]: INFO : Ignition finished successfully Jan 29 10:56:53.419418 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 10:56:53.427387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 10:56:53.433035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 10:56:53.436536 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 10:56:53.436648 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 29 10:56:53.445132 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:56:53.445132 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:56:53.447665 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:56:53.450018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:56:53.450841 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 10:56:53.456347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 10:56:53.489565 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 10:56:53.489710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 10:56:53.493126 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 10:56:53.494598 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 10:56:53.496427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 10:56:53.501389 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 10:56:53.512174 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:56:53.519536 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 10:56:53.528957 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:56:53.530208 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:56:53.530914 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 10:56:53.531754 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 10:56:53.531869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:56:53.533063 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 10:56:53.533667 systemd[1]: Stopped target basic.target - Basic System. Jan 29 10:56:53.534629 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 10:56:53.535563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:56:53.536463 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 10:56:53.537452 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 10:56:53.538462 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:56:53.539489 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 10:56:53.540371 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 10:56:53.541318 systemd[1]: Stopped target swap.target - Swaps. Jan 29 10:56:53.542097 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 10:56:53.542227 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:56:53.543367 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:56:53.544336 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:56:53.545264 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 10:56:53.547228 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 10:56:53.547824 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 10:56:53.547928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 10:56:53.549427 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 10:56:53.549534 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:56:53.550598 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 10:56:53.550684 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 10:56:53.551657 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 10:56:53.551745 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 10:56:53.560816 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 10:56:53.566675 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 10:56:53.569338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 10:56:53.569702 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:56:53.572566 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 10:56:53.572678 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:56:53.578454 ignition[1014]: INFO : Ignition 2.20.0 Jan 29 10:56:53.578454 ignition[1014]: INFO : Stage: umount Jan 29 10:56:53.590609 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:56:53.590609 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 10:56:53.590609 ignition[1014]: INFO : umount: umount passed Jan 29 10:56:53.590609 ignition[1014]: INFO : Ignition finished successfully Jan 29 10:56:53.587496 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 10:56:53.587592 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 10:56:53.589025 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 10:56:53.592250 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 10:56:53.592889 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 10:56:53.592937 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 10:56:53.593863 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 10:56:53.593905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 10:56:53.594522 systemd[1]: Stopped target network.target - Network. Jan 29 10:56:53.595201 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 10:56:53.595243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:56:53.598651 systemd[1]: Stopped target paths.target - Path Units. Jan 29 10:56:53.599358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 10:56:53.600803 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:56:53.601976 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 10:56:53.602741 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 10:56:53.604361 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 10:56:53.604406 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:56:53.605268 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 10:56:53.605304 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 10:56:53.606029 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 10:56:53.606083 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 10:56:53.608398 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 10:56:53.609227 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 10:56:53.610736 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 10:56:53.611669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 10:56:53.613078 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:56:53.613693 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:56:53.613780 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:56:53.617491 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 29 10:56:53.619418 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:56:53.619516 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 10:56:53.621488 systemd-networkd[781]: eth1: DHCPv6 lease lost Jan 29 10:56:53.623800 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 10:56:53.623896 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 10:56:53.628960 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 10:56:53.629247 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 10:56:53.631479 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 10:56:53.631772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:56:53.632608 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 10:56:53.632658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 10:56:53.642360 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 10:56:53.643115 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 10:56:53.643218 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:56:53.645139 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:56:53.645195 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:56:53.646265 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 10:56:53.646319 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 10:56:53.647248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 10:56:53.647292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:56:53.648328 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:56:53.656303 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 10:56:53.656668 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:56:53.658313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 10:56:53.658351 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 10:56:53.662329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 10:56:53.662367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:56:53.663310 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 29 10:56:53.663359 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:56:53.664676 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 10:56:53.664716 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 10:56:53.665981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:56:53.666025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:56:53.673406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 10:56:53.675093 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 10:56:53.675209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:56:53.677370 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 10:56:53.677439 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:56:53.680021 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:56:53.680103 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:56:53.683161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:56:53.683279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:53.686064 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 10:56:53.686275 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 10:56:53.688131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 10:56:53.688252 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 10:56:53.689650 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 10:56:53.695353 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 10:56:53.702930 systemd[1]: Switching root. Jan 29 10:56:53.752351 systemd-journald[236]: Journal stopped Jan 29 10:56:54.593774 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 29 10:56:54.593833 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 10:56:54.593845 kernel: SELinux: policy capability open_perms=1 Jan 29 10:56:54.593855 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 10:56:54.593867 kernel: SELinux: policy capability always_check_network=0 Jan 29 10:56:54.593879 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 10:56:54.593888 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 10:56:54.593897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 10:56:54.593906 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 10:56:54.593919 kernel: audit: type=1403 audit(1738148213.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 10:56:54.593929 systemd[1]: Successfully loaded SELinux policy in 33.730ms. Jan 29 10:56:54.593953 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.441ms. Jan 29 10:56:54.593965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:56:54.593976 systemd[1]: Detected virtualization kvm. 
Jan 29 10:56:54.593986 systemd[1]: Detected architecture arm64. Jan 29 10:56:54.593996 systemd[1]: Detected first boot. Jan 29 10:56:54.594006 systemd[1]: Hostname set to <ci-4186-1-0-3-8e4516c670>. Jan 29 10:56:54.594016 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:56:54.594040 zram_generator::config[1057]: No configuration found. Jan 29 10:56:54.594053 systemd[1]: Populated /etc with preset unit settings. Jan 29 10:56:54.594066 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 10:56:54.594076 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 10:56:54.594086 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 10:56:54.594096 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 10:56:54.594106 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 10:56:54.594116 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 10:56:54.594128 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 10:56:54.594138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 10:56:54.594149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 10:56:54.594159 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 10:56:54.594169 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 10:56:54.596231 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:56:54.596260 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:56:54.596272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 10:56:54.596282 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 10:56:54.596293 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 10:56:54.596304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:56:54.596320 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 10:56:54.596330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:56:54.596340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 10:56:54.596351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 10:56:54.596361 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 10:56:54.596371 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 10:56:54.596382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:56:54.596398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:56:54.596408 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:56:54.596418 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:56:54.596431 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 10:56:54.596848 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 10:56:54.596872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:56:54.596883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:56:54.596893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:56:54.596903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 10:56:54.596918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 10:56:54.596928 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 10:56:54.596938 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 10:56:54.596948 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 10:56:54.596957 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 10:56:54.596967 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 10:56:54.596983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 10:56:54.596995 systemd[1]: Reached target machines.target - Containers. Jan 29 10:56:54.597005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 10:56:54.597016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:54.597026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:56:54.597052 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 10:56:54.597064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:56:54.597074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:56:54.597087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:56:54.597102 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 10:56:54.597112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:56:54.597123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 10:56:54.597133 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 10:56:54.597144 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 10:56:54.597153 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 10:56:54.597164 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 10:56:54.597189 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:56:54.597201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:56:54.597212 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 10:56:54.597221 kernel: fuse: init (API version 7.39) Jan 29 10:56:54.597232 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 10:56:54.597242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:56:54.597252 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 10:56:54.597262 systemd[1]: Stopped verity-setup.service. Jan 29 10:56:54.597275 kernel: loop: module loaded Jan 29 10:56:54.597285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 29 10:56:54.597295 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 10:56:54.597336 systemd-journald[1125]: Collecting audit messages is disabled. Jan 29 10:56:54.597369 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 10:56:54.597381 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 10:56:54.597392 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 10:56:54.597403 systemd-journald[1125]: Journal started Jan 29 10:56:54.597429 systemd-journald[1125]: Runtime Journal (/run/log/journal/131e25c68977439490f4aca5ff658c65) is 8.0M, max 76.6M, 68.6M free. Jan 29 10:56:54.360095 systemd[1]: Queued start job for default target multi-user.target. Jan 29 10:56:54.382353 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 10:56:54.383074 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 10:56:54.602326 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:56:54.599530 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 10:56:54.602666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:56:54.605955 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 10:56:54.606113 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 10:56:54.607095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:56:54.607247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:56:54.607992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:56:54.608120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:56:54.610559 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 10:56:54.610691 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 10:56:54.611460 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:56:54.611577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:56:54.614486 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 10:56:54.616195 kernel: ACPI: bus type drm_connector registered Jan 29 10:56:54.617630 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:56:54.618148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:56:54.629843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:56:54.631282 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 10:56:54.636403 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 10:56:54.643079 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 10:56:54.645420 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 10:56:54.645995 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 10:56:54.646043 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:56:54.647539 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 10:56:54.651390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 29 10:56:54.656620 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 10:56:54.657998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:56:54.661379 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 10:56:54.663793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 10:56:54.664791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:56:54.667260 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 10:56:54.667854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:56:54.672451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:56:54.677605 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 10:56:54.684420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:56:54.687060 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 10:56:54.688767 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 10:56:54.690223 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 10:56:54.706219 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 10:56:54.707655 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 10:56:54.711391 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 10:56:54.718517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 10:56:54.734211 kernel: loop0: detected capacity change from 0 to 194096 Jan 29 10:56:54.740800 systemd-journald[1125]: Time spent on flushing to /var/log/journal/131e25c68977439490f4aca5ff658c65 is 87.961ms for 1134 entries. Jan 29 10:56:54.740800 systemd-journald[1125]: System Journal (/var/log/journal/131e25c68977439490f4aca5ff658c65) is 8.0M, max 584.8M, 576.8M free. Jan 29 10:56:54.862791 systemd-journald[1125]: Received client request to flush runtime journal. Jan 29 10:56:54.862854 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 10:56:54.862876 kernel: loop1: detected capacity change from 0 to 116784 Jan 29 10:56:54.770263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:56:54.791575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:56:54.803602 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 10:56:54.806731 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 29 10:56:54.806741 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 29 10:56:54.823432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:56:54.828487 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 10:56:54.839005 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 29 10:56:54.841170 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 10:56:54.842400 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 10:56:54.866461 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 10:56:54.877239 kernel: loop2: detected capacity change from 0 to 8 Jan 29 10:56:54.891532 kernel: loop3: detected capacity change from 0 to 113552 Jan 29 10:56:54.896724 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 10:56:54.904391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:56:54.925294 kernel: loop4: detected capacity change from 0 to 194096 Jan 29 10:56:54.932947 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 29 10:56:54.933319 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 29 10:56:54.939689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:56:54.954229 kernel: loop5: detected capacity change from 0 to 116784 Jan 29 10:56:54.971512 kernel: loop6: detected capacity change from 0 to 8 Jan 29 10:56:54.974376 kernel: loop7: detected capacity change from 0 to 113552 Jan 29 10:56:54.995988 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 10:56:55.000587 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 29 10:56:55.005342 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 10:56:55.005358 systemd[1]: Reloading... Jan 29 10:56:55.140215 zram_generator::config[1225]: No configuration found. Jan 29 10:56:55.258843 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 10:56:55.278437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:56:55.324556 systemd[1]: Reloading finished in 318 ms. Jan 29 10:56:55.352213 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 10:56:55.353124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 10:56:55.365093 systemd[1]: Starting ensure-sysext.service... Jan 29 10:56:55.372576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:56:55.390440 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Jan 29 10:56:55.390471 systemd[1]: Reloading... Jan 29 10:56:55.420454 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 10:56:55.420654 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 10:56:55.421812 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 10:56:55.422118 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 29 10:56:55.422259 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 29 10:56:55.425376 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 10:56:55.425478 systemd-tmpfiles[1263]: Skipping /boot Jan 29 10:56:55.438966 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:56:55.439113 systemd-tmpfiles[1263]: Skipping /boot Jan 29 10:56:55.475218 zram_generator::config[1289]: No configuration found. Jan 29 10:56:55.566688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:56:55.612765 systemd[1]: Reloading finished in 221 ms. Jan 29 10:56:55.632450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 10:56:55.634627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:56:55.646066 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:56:55.650341 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 10:56:55.652967 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 10:56:55.664398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:56:55.669087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:56:55.681411 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 10:56:55.689453 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 10:56:55.692707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:55.698512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:56:55.702390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:56:55.706425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:56:55.707014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:56:55.710917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:55.711092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:56:55.715203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 10:56:55.716934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 10:56:55.722085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:55.732620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:56:55.733428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:56:55.737465 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 10:56:55.741848 systemd[1]: Finished ensure-sysext.service. Jan 29 10:56:55.749476 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 10:56:55.759801 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Jan 29 10:56:55.764317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 10:56:55.764506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:56:55.765917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:56:55.768247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:56:55.769348 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:56:55.769474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:56:55.772433 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:56:55.780550 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:56:55.781127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:56:55.782627 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:56:55.791282 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 10:56:55.793254 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 10:56:55.795451 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 10:56:55.797563 augenrules[1368]: No rules Jan 29 10:56:55.800593 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:56:55.800793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:56:55.805298 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 10:56:55.815128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:56:55.827263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:56:55.902086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 10:56:55.903084 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 10:56:55.921173 systemd-resolved[1332]: Positive Trust Anchors: Jan 29 10:56:55.921209 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:56:55.921248 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:56:55.923568 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 10:56:55.926555 systemd-resolved[1332]: Using system hostname 'ci-4186-1-0-3-8e4516c670'. Jan 29 10:56:55.939366 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:56:55.939997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
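Both configuration reloads above log that docker.socket still points ListenStream= at the legacy /var/run/docker.sock path and that systemd rewrites it to /run/docker.sock at runtime, with a request to update the unit file. A minimal sketch of a drop-in that makes the shipped unit match what systemd already does, assuming /etc is writable on this image (the drop-in path and file name are illustrative):

  mkdir -p /etc/systemd/system/docker.socket.d
  cat > /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
  [Socket]
  # Clear the inherited listen list, then re-add the non-legacy path.
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload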
Jan 29 10:56:55.954383 systemd-networkd[1385]: lo: Link UP Jan 29 10:56:55.954391 systemd-networkd[1385]: lo: Gained carrier Jan 29 10:56:55.955681 systemd-networkd[1385]: Enumeration completed Jan 29 10:56:55.955792 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:56:55.956447 systemd[1]: Reached target network.target - Network. Jan 29 10:56:55.956537 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:55.956540 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:56:55.957217 systemd-networkd[1385]: eth0: Link UP Jan 29 10:56:55.957220 systemd-networkd[1385]: eth0: Gained carrier Jan 29 10:56:55.957233 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:55.972724 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 10:56:56.001269 systemd-networkd[1385]: eth0: DHCPv4 address 188.34.178.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 10:56:56.002908 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 29 10:56:56.017295 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:56.023203 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 10:56:56.033757 systemd-networkd[1385]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:56.033773 systemd-networkd[1385]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:56:56.035685 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 29 10:56:56.035799 systemd-networkd[1385]: eth1: Link UP Jan 29 10:56:56.035803 systemd-networkd[1385]: eth1: Gained carrier Jan 29 10:56:56.035820 systemd-networkd[1385]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:56.043940 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 29 10:56:56.046229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1387) Jan 29 10:56:56.061330 systemd-networkd[1385]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 10:56:56.061893 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 29 10:56:56.085366 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 10:56:56.085476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:56.097474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:56:56.100534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:56:56.106068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:56:56.107315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 10:56:56.107364 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 10:56:56.107681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:56:56.109704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:56:56.111617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:56:56.111745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:56:56.120613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:56:56.123421 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:56:56.124149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:56:56.128552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:56:56.130202 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 29 10:56:56.130257 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 10:56:56.130273 kernel: [drm] features: -context_init Jan 29 10:56:56.131694 kernel: [drm] number of scanouts: 1 Jan 29 10:56:56.133661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 10:56:56.137215 kernel: [drm] number of cap sets: 0 Jan 29 10:56:56.138448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 10:56:56.141200 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 10:56:56.148212 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 10:56:56.164435 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 10:56:56.175230 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 10:56:56.193770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:56:56.271280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:56.340751 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 10:56:56.348486 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 10:56:56.359659 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:56:56.387928 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 10:56:56.389764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:56:56.391087 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:56:56.392522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 10:56:56.394033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 10:56:56.395843 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 10:56:56.396711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
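The network bring-up above notes several times that eth0 and eth1 only matched /usr/lib/systemd/network/zz-default.network by a potentially unpredictable interface name. Where that matters, one hedged option is a per-interface unit that matches on a stable attribute such as the MAC address instead; the address and file name below are placeholders, not values taken from this host:

  cat > /etc/systemd/network/10-eth0.network <<'EOF'
  [Match]
  # Match on the hardware address rather than the kernel-assigned name.
  MACAddress=aa:bb:cc:dd:ee:ff

  [Network]
  DHCP=yes
  EOF
  networkctl reload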
Jan 29 10:56:56.397368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 10:56:56.397944 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 10:56:56.397978 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:56:56.398452 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:56:56.400348 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 10:56:56.402368 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 10:56:56.407671 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 10:56:56.411468 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 10:56:56.413265 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 10:56:56.413968 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:56:56.414596 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:56:56.415230 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:56:56.415254 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:56:56.427768 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 10:56:56.433064 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:56:56.435457 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 10:56:56.441331 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 10:56:56.447630 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 10:56:56.456429 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 10:56:56.456953 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 10:56:56.458514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 10:56:56.462613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 10:56:56.467448 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 10:56:56.469705 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 10:56:56.473370 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 10:56:56.480292 jq[1453]: false Jan 29 10:56:56.477490 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 10:56:56.478763 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 10:56:56.479280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 10:56:56.480716 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 10:56:56.483427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 10:56:56.487243 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 10:56:56.489563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 29 10:56:56.489738 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 10:56:56.511699 extend-filesystems[1454]: Found loop4 Jan 29 10:56:56.511699 extend-filesystems[1454]: Found loop5 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found loop6 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found loop7 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda1 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda2 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda3 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found usr Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda4 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda6 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda7 Jan 29 10:56:56.525010 extend-filesystems[1454]: Found sda9 Jan 29 10:56:56.525010 extend-filesystems[1454]: Checking size of /dev/sda9 Jan 29 10:56:56.519865 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 10:56:56.519661 dbus-daemon[1450]: [system] SELinux support is enabled Jan 29 10:56:56.563273 coreos-metadata[1449]: Jan 29 10:56:56.549 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 10:56:56.522412 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 10:56:56.568329 extend-filesystems[1454]: Resized partition /dev/sda9 Jan 29 10:56:56.568994 coreos-metadata[1449]: Jan 29 10:56:56.563 INFO Fetch successful Jan 29 10:56:56.568994 coreos-metadata[1449]: Jan 29 10:56:56.563 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 10:56:56.568994 coreos-metadata[1449]: Jan 29 10:56:56.566 INFO Fetch successful Jan 29 10:56:56.522439 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 10:56:56.523268 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 10:56:56.523284 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:56:56.547748 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 10:56:56.547913 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 10:56:56.551588 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:56:56.569824 jq[1462]: true Jan 29 10:56:56.569951 tar[1464]: linux-arm64/helm Jan 29 10:56:56.573006 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 10:56:56.574151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 10:56:56.590934 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Jan 29 10:56:56.594511 update_engine[1461]: I20250129 10:56:56.594367 1461 main.cc:92] Flatcar Update Engine starting Jan 29 10:56:56.602347 update_engine[1461]: I20250129 10:56:56.598403 1461 update_check_scheduler.cc:74] Next update check in 8m3s Jan 29 10:56:56.601640 systemd[1]: Started update-engine.service - Update Engine. Jan 29 10:56:56.605208 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
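extend-filesystems has just checked the size of /dev/sda9, and the kernel lines that follow show the ext4 filesystem being grown online from 1617920 to 9393147 blocks. As a rough sketch of what that automation amounts to when done by hand (device names copied from this log; growpart comes from cloud-utils and is only needed if the partition itself has to grow first):

  # Grow partition 9 to fill the disk, then grow the mounted ext4 filesystem online.
  growpart /dev/sda 9
  resize2fs /dev/sda9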
Jan 29 10:56:56.611567 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 10:56:56.633923 jq[1485]: true Jan 29 10:56:56.701945 systemd-logind[1460]: New seat seat0. Jan 29 10:56:56.713212 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1381) Jan 29 10:56:56.721526 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:56:56.721549 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 10:56:56.721756 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:56:56.761988 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 10:56:56.765091 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 10:56:56.772217 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 10:56:56.790264 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 10:56:56.790264 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 10:56:56.790264 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 10:56:56.797963 extend-filesystems[1454]: Resized filesystem in /dev/sda9 Jan 29 10:56:56.797963 extend-filesystems[1454]: Found sr0 Jan 29 10:56:56.793800 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:56:56.796763 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:56:56.807838 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:56:56.814341 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:56:56.824477 systemd[1]: Starting sshkeys.service... Jan 29 10:56:56.845332 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 10:56:56.856541 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 10:56:56.896910 coreos-metadata[1529]: Jan 29 10:56:56.896 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 10:56:56.898830 coreos-metadata[1529]: Jan 29 10:56:56.898 INFO Fetch successful Jan 29 10:56:56.906097 unknown[1529]: wrote ssh authorized keys file for user: core Jan 29 10:56:56.917202 containerd[1475]: time="2025-01-29T10:56:56.913709080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:56:56.929894 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:56:56.938162 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:56:56.939630 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 10:56:56.944998 systemd[1]: Finished sshkeys.service. Jan 29 10:56:56.956124 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:56:56.968487 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:56:56.980335 containerd[1475]: time="2025-01-29T10:56:56.980277640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.982091 containerd[1475]: time="2025-01-29T10:56:56.982039280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:56.982235 containerd[1475]: time="2025-01-29T10:56:56.982219480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:56:56.983673 containerd[1475]: time="2025-01-29T10:56:56.982321400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 10:56:56.983492 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:56:56.986742 containerd[1475]: time="2025-01-29T10:56:56.982487680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:56:56.986742 containerd[1475]: time="2025-01-29T10:56:56.983925760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.986742 containerd[1475]: time="2025-01-29T10:56:56.984059840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:56.986742 containerd[1475]: time="2025-01-29T10:56:56.984100120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.986261 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:56:56.987366 containerd[1475]: time="2025-01-29T10:56:56.987143040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.987559320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.987590560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.987601520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.987721560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.987921960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.988081920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:56.989210 containerd[1475]: time="2025-01-29T10:56:56.988099760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:56:56.991211 containerd[1475]: time="2025-01-29T10:56:56.990564040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 10:56:56.991211 containerd[1475]: time="2025-01-29T10:56:56.990675800Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:56:56.996208 containerd[1475]: time="2025-01-29T10:56:56.995259560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:56:56.996390 containerd[1475]: time="2025-01-29T10:56:56.996370640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 10:56:56.996493 containerd[1475]: time="2025-01-29T10:56:56.996480640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 10:56:56.996559 containerd[1475]: time="2025-01-29T10:56:56.996547000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:56:56.996618 containerd[1475]: time="2025-01-29T10:56:56.996607200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:56:56.997111 containerd[1475]: time="2025-01-29T10:56:56.996952000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 10:56:56.997877 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999334360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999616480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999636520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999651280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999665000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999678000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999690160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999705200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999719400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999752080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999765400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999776880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999798600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.000760 containerd[1475]: time="2025-01-29T10:56:56.999812960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999824880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999837160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999848760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999863080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999877000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999889720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999901680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999917880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999930120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999943320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999955280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999970040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:56.999990320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:57.000003080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.001284 containerd[1475]: time="2025-01-29T10:56:57.000013160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004289440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004389600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004402760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004415360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004424480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004436680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004447400Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:56:57.005630 containerd[1475]: time="2025-01-29T10:56:57.004456920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 10:56:57.005843 containerd[1475]: time="2025-01-29T10:56:57.004805080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:56:57.005843 containerd[1475]: time="2025-01-29T10:56:57.004852120Z" level=info msg="Connect containerd service" Jan 29 10:56:57.005843 containerd[1475]: time="2025-01-29T10:56:57.004884440Z" level=info msg="using legacy CRI server" Jan 29 10:56:57.005843 containerd[1475]: time="2025-01-29T10:56:57.004891600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:56:57.005843 containerd[1475]: time="2025-01-29T10:56:57.005210720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:56:57.006495 containerd[1475]: time="2025-01-29T10:56:57.006468080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009279320Z" level=info msg="Start subscribing containerd event" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009344640Z" level=info msg="Start recovering state" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009419880Z" level=info msg="Start event monitor" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009430880Z" level=info msg="Start snapshots syncer" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009440600Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.009447760Z" level=info msg="Start streaming server" Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.010225080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:56:57.010300 containerd[1475]: time="2025-01-29T10:56:57.010267400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:56:57.011744 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:56:57.012537 containerd[1475]: time="2025-01-29T10:56:57.012514880Z" level=info msg="containerd successfully booted in 0.100114s" Jan 29 10:56:57.024238 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:56:57.034457 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:56:57.042449 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 10:56:57.042774 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:56:57.043165 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 10:56:57.198286 tar[1464]: linux-arm64/LICENSE Jan 29 10:56:57.198286 tar[1464]: linux-arm64/README.md Jan 29 10:56:57.210569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 10:56:57.265392 systemd-networkd[1385]: eth1: Gained IPv6LL Jan 29 10:56:57.266223 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. 
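containerd starts with NetworkPluginConfDir:/etc/cni/net.d and immediately logs that no CNI network config was found there, so the CRI plugin comes up without pod networking until something installs one. A purely illustrative conflist of the kind that loader accepts, shown only as a sketch; a real cluster would normally get its config from the chosen CNI add-on rather than by hand, and the name and subnet below are placeholders:

  mkdir -p /etc/cni/net.d
  cat > /etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.88.0.0/16"
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF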
Jan 29 10:56:57.268683 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:56:57.271847 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:56:57.281484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:56:57.284138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:56:57.314038 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:56:57.329321 systemd-networkd[1385]: eth0: Gained IPv6LL Jan 29 10:56:57.330647 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 29 10:56:57.950325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:56:57.952085 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:56:57.957948 systemd[1]: Startup finished in 760ms (kernel) + 13.215s (initrd) + 4.091s (userspace) = 18.067s. Jan 29 10:56:57.964657 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:56:57.977913 agetty[1559]: failed to open credentials directory Jan 29 10:56:57.979203 agetty[1558]: failed to open credentials directory Jan 29 10:56:58.517477 kubelet[1579]: E0129 10:56:58.517433 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:56:58.519423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:56:58.519549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:08.610314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 10:57:08.616445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:08.727631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:08.739737 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:08.794345 kubelet[1599]: E0129 10:57:08.794300 1599 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:08.797674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:08.797822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:18.860852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 10:57:18.871529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:18.973771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 10:57:18.986771 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:19.035451 kubelet[1615]: E0129 10:57:19.035388 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:19.038440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:19.038633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:27.536714 systemd-timesyncd[1358]: Contacted time server 194.50.19.117:123 (2.flatcar.pool.ntp.org). Jan 29 10:57:27.536826 systemd-timesyncd[1358]: Initial clock synchronization to Wed 2025-01-29 10:57:27.284563 UTC. Jan 29 10:57:29.110376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 10:57:29.115544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:29.212589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:29.216685 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:29.260008 kubelet[1631]: E0129 10:57:29.259947 1631 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:29.262703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:29.262967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:39.360556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 10:57:39.367481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:39.489878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:39.494996 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:39.539435 kubelet[1647]: E0129 10:57:39.539344 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:39.542608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:39.542900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:41.936305 update_engine[1461]: I20250129 10:57:41.935609 1461 update_attempter.cc:509] Updating boot flags... 
Jan 29 10:57:41.983280 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) Jan 29 10:57:42.036223 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) Jan 29 10:57:42.087199 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) Jan 29 10:57:49.610785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 10:57:49.621546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:49.728102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:49.740798 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:49.786595 kubelet[1685]: E0129 10:57:49.786539 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:49.790197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:49.790682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:59.860630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 10:57:59.868538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:59.994453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:59.999167 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:00.046610 kubelet[1700]: E0129 10:58:00.046550 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:00.049105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:00.049322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:58:10.110878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 10:58:10.121570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:58:10.238877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:58:10.250723 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:10.304485 kubelet[1717]: E0129 10:58:10.304402 1717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:10.306985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:10.307114 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
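Every kubelet start in this stretch fails the same way: /var/lib/kubelet/config.yaml does not exist yet, the unit exits, and systemd schedules another restart roughly every ten seconds while the counter climbs. On a kubeadm-provisioned node this is the expected pre-join state; the missing file is written once kubeadm runs, for example as sketched below (the endpoint, token, and hash are placeholders, not values from this log):

  # The crash loop ends once kubeadm drops the kubelet config in place, e.g.:
  kubeadm join <control-plane-endpoint>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
  # Afterwards the file the unit was complaining about should exist:
  ls -l /var/lib/kubelet/config.yaml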
Jan 29 10:58:20.360604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 10:58:20.374723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:58:20.535446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:58:20.545865 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:20.603911 kubelet[1733]: E0129 10:58:20.603834 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:20.607249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:20.607416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:58:30.610981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 10:58:30.620541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:58:30.755446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:58:30.755504 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:30.797251 kubelet[1749]: E0129 10:58:30.797154 1749 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:30.800127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:30.800445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:58:40.860696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 10:58:40.867559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:58:40.977902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:58:40.985760 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:41.033908 kubelet[1764]: E0129 10:58:41.033839 1764 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:41.037537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:41.038002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:58:44.515706 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:58:44.528692 systemd[1]: Started sshd@0-188.34.178.132:22-147.75.109.163:58720.service - OpenSSH per-connection server daemon (147.75.109.163:58720). 
Jan 29 10:58:45.534850 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 58720 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 10:58:45.537599 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:58:45.548565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:58:45.556640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:58:45.561647 systemd-logind[1460]: New session 1 of user core. Jan 29 10:58:45.569527 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:58:45.575567 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 10:58:45.580457 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:58:45.680372 systemd[1778]: Queued start job for default target default.target. Jan 29 10:58:45.691713 systemd[1778]: Created slice app.slice - User Application Slice. Jan 29 10:58:45.692210 systemd[1778]: Reached target paths.target - Paths. Jan 29 10:58:45.692714 systemd[1778]: Reached target timers.target - Timers. Jan 29 10:58:45.694655 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:58:45.709101 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:58:45.709700 systemd[1778]: Reached target sockets.target - Sockets. Jan 29 10:58:45.709743 systemd[1778]: Reached target basic.target - Basic System. Jan 29 10:58:45.709819 systemd[1778]: Reached target default.target - Main User Target. Jan 29 10:58:45.709869 systemd[1778]: Startup finished in 122ms. Jan 29 10:58:45.710005 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:58:45.720524 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:58:46.424690 systemd[1]: Started sshd@1-188.34.178.132:22-147.75.109.163:58734.service - OpenSSH per-connection server daemon (147.75.109.163:58734). Jan 29 10:58:47.411578 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 58734 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 10:58:47.413483 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:58:47.419599 systemd-logind[1460]: New session 2 of user core. Jan 29 10:58:47.428527 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 10:58:48.093386 sshd[1791]: Connection closed by 147.75.109.163 port 58734 Jan 29 10:58:48.094350 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jan 29 10:58:48.099723 systemd[1]: sshd@1-188.34.178.132:22-147.75.109.163:58734.service: Deactivated successfully. Jan 29 10:58:48.102913 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 10:58:48.103714 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:58:48.104772 systemd-logind[1460]: Removed session 2. Jan 29 10:58:48.269539 systemd[1]: Started sshd@2-188.34.178.132:22-147.75.109.163:58388.service - OpenSSH per-connection server daemon (147.75.109.163:58388). Jan 29 10:58:49.251788 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 58388 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 10:58:49.254315 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:58:49.260658 systemd-logind[1460]: New session 3 of user core. 
Jan 29 10:58:49.267741 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 10:58:49.928787 sshd[1798]: Connection closed by 147.75.109.163 port 58388 Jan 29 10:58:49.929508 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 29 10:58:49.933524 systemd[1]: sshd@2-188.34.178.132:22-147.75.109.163:58388.service: Deactivated successfully. Jan 29 10:58:49.935279 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 10:58:49.935962 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:58:49.937230 systemd-logind[1460]: Removed session 3. Jan 29 10:58:50.106658 systemd[1]: Started sshd@3-188.34.178.132:22-147.75.109.163:58396.service - OpenSSH per-connection server daemon (147.75.109.163:58396). Jan 29 10:58:51.088864 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 58396 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 10:58:51.091057 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:58:51.091936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 10:58:51.097455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:58:51.102100 systemd-logind[1460]: New session 4 of user core. Jan 29 10:58:51.105543 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 10:58:51.195717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:58:51.199922 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:58:51.239986 kubelet[1814]: E0129 10:58:51.239920 1814 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:58:51.242524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:58:51.242680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:58:51.768723 sshd[1808]: Connection closed by 147.75.109.163 port 58396 Jan 29 10:58:51.769311 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 29 10:58:51.772736 systemd[1]: sshd@3-188.34.178.132:22-147.75.109.163:58396.service: Deactivated successfully. Jan 29 10:58:51.775112 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:58:51.776224 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:58:51.777378 systemd-logind[1460]: Removed session 4. Jan 29 10:58:51.943114 systemd[1]: Started sshd@4-188.34.178.132:22-147.75.109.163:58406.service - OpenSSH per-connection server daemon (147.75.109.163:58406). Jan 29 10:58:52.953531 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 58406 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 10:58:52.956077 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:58:52.961267 systemd-logind[1460]: New session 5 of user core. Jan 29 10:58:52.968497 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 10:58:53.491579 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:58:53.491880 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:58:53.789480 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 10:58:53.791714 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 10:58:54.008933 dockerd[1847]: time="2025-01-29T10:58:54.008510596Z" level=info msg="Starting up" Jan 29 10:58:54.106574 dockerd[1847]: time="2025-01-29T10:58:54.106454213Z" level=info msg="Loading containers: start." Jan 29 10:58:54.273314 kernel: Initializing XFRM netlink socket Jan 29 10:58:54.353590 systemd-networkd[1385]: docker0: Link UP Jan 29 10:58:54.393756 dockerd[1847]: time="2025-01-29T10:58:54.393656984Z" level=info msg="Loading containers: done." Jan 29 10:58:54.408564 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1758296419-merged.mount: Deactivated successfully. Jan 29 10:58:54.413242 dockerd[1847]: time="2025-01-29T10:58:54.412551851Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 10:58:54.413242 dockerd[1847]: time="2025-01-29T10:58:54.412737693Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 10:58:54.413242 dockerd[1847]: time="2025-01-29T10:58:54.412931376Z" level=info msg="Daemon has completed initialization" Jan 29 10:58:54.448329 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 10:58:54.449086 dockerd[1847]: time="2025-01-29T10:58:54.448225040Z" level=info msg="API listen on /run/docker.sock" Jan 29 10:58:55.528097 containerd[1475]: time="2025-01-29T10:58:55.527691333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 10:58:56.202690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289789687.mount: Deactivated successfully. 
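dockerd reports "API listen on /run/docker.sock" once initialization completes. A quick, illustrative way to confirm the daemon is actually accepting connections on that socket (path taken from the log line above):

    # Sketch: probe the Docker daemon's Unix socket named in the log.
    import socket

    SOCK = "/run/docker.sock"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        try:
            s.connect(SOCK)
            print(f"{SOCK}: daemon is accepting connections")
        except OSError as exc:
            print(f"{SOCK}: not reachable ({exc})")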
Jan 29 10:58:57.906210 containerd[1475]: time="2025-01-29T10:58:57.905983820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:57.907856 containerd[1475]: time="2025-01-29T10:58:57.907303595Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 29 10:58:57.908992 containerd[1475]: time="2025-01-29T10:58:57.908871933Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:57.913588 containerd[1475]: time="2025-01-29T10:58:57.913509867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:57.915436 containerd[1475]: time="2025-01-29T10:58:57.915234367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.387477713s" Jan 29 10:58:57.915436 containerd[1475]: time="2025-01-29T10:58:57.915275088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 10:58:57.941792 containerd[1475]: time="2025-01-29T10:58:57.941704554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 10:58:59.899806 containerd[1475]: time="2025-01-29T10:58:59.899744174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:59.901114 containerd[1475]: time="2025-01-29T10:58:59.901069829Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 29 10:58:59.901773 containerd[1475]: time="2025-01-29T10:58:59.901466914Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:59.904666 containerd[1475]: time="2025-01-29T10:58:59.904608910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:59.905994 containerd[1475]: time="2025-01-29T10:58:59.905862284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.964115329s" Jan 29 10:58:59.905994 containerd[1475]: time="2025-01-29T10:58:59.905900244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 10:58:59.932191 
containerd[1475]: time="2025-01-29T10:58:59.932127342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 10:59:01.340271 containerd[1475]: time="2025-01-29T10:59:01.340228687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:01.342167 containerd[1475]: time="2025-01-29T10:59:01.342104628Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 29 10:59:01.343081 containerd[1475]: time="2025-01-29T10:59:01.343037198Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:01.346712 containerd[1475]: time="2025-01-29T10:59:01.346673919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:01.348636 containerd[1475]: time="2025-01-29T10:59:01.348584300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.416412517s" Jan 29 10:59:01.348636 containerd[1475]: time="2025-01-29T10:59:01.348624701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 10:59:01.360201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 10:59:01.368351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:01.374147 containerd[1475]: time="2025-01-29T10:59:01.373748261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 10:59:01.465331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:01.470153 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:59:01.518035 kubelet[2128]: E0129 10:59:01.517984 2128 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:59:01.521233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:59:01.521410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:59:02.755238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416634164.mount: Deactivated successfully. 
Jan 29 10:59:03.072414 containerd[1475]: time="2025-01-29T10:59:03.072168063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:03.073780 containerd[1475]: time="2025-01-29T10:59:03.073716200Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 29 10:59:03.074593 containerd[1475]: time="2025-01-29T10:59:03.074422287Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:03.077513 containerd[1475]: time="2025-01-29T10:59:03.077343720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:03.079603 containerd[1475]: time="2025-01-29T10:59:03.078263490Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.704470548s" Jan 29 10:59:03.079603 containerd[1475]: time="2025-01-29T10:59:03.078307810Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 10:59:03.102363 containerd[1475]: time="2025-01-29T10:59:03.102278753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 10:59:03.713747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604314738.mount: Deactivated successfully. 
Jan 29 10:59:04.334316 containerd[1475]: time="2025-01-29T10:59:04.334258104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.335686 containerd[1475]: time="2025-01-29T10:59:04.335412957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 29 10:59:04.336407 containerd[1475]: time="2025-01-29T10:59:04.336356807Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.339531 containerd[1475]: time="2025-01-29T10:59:04.339451960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.340804 containerd[1475]: time="2025-01-29T10:59:04.340654172Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.238135816s" Jan 29 10:59:04.340804 containerd[1475]: time="2025-01-29T10:59:04.340693973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 10:59:04.363351 containerd[1475]: time="2025-01-29T10:59:04.363291454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 10:59:04.890235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918709395.mount: Deactivated successfully. 
Jan 29 10:59:04.895437 containerd[1475]: time="2025-01-29T10:59:04.895378248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.896948 containerd[1475]: time="2025-01-29T10:59:04.896776543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 29 10:59:04.896948 containerd[1475]: time="2025-01-29T10:59:04.896881504Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.899224 containerd[1475]: time="2025-01-29T10:59:04.899146448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:04.900129 containerd[1475]: time="2025-01-29T10:59:04.899994538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 536.646363ms" Jan 29 10:59:04.900129 containerd[1475]: time="2025-01-29T10:59:04.900029258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 10:59:04.927719 containerd[1475]: time="2025-01-29T10:59:04.927534391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 10:59:05.523706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931344428.mount: Deactivated successfully. Jan 29 10:59:08.516044 containerd[1475]: time="2025-01-29T10:59:08.515992880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:08.518348 containerd[1475]: time="2025-01-29T10:59:08.518307944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 29 10:59:08.519402 containerd[1475]: time="2025-01-29T10:59:08.519356115Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:08.523112 containerd[1475]: time="2025-01-29T10:59:08.523067354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:08.524621 containerd[1475]: time="2025-01-29T10:59:08.524576050Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.596986418s" Jan 29 10:59:08.524621 containerd[1475]: time="2025-01-29T10:59:08.524618971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 10:59:11.610622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
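The containerd entries above record each control-plane image pull together with its size and duration. A rough sketch that extracts the image/duration pairs from journal output; the pattern is keyed to the msg="Pulled image ..." format shown here, and the script simply reads lines from stdin (for example piped from `journalctl -u containerd -o cat`):

    # Sketch: summarise image pull durations from containerd journal lines.
    # Usage: journalctl -u containerd -o cat | python3 pull_times.py
    import re
    import sys

    PULLED = re.compile(r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?in (?P<duration>[\d.]+m?s)')

    for line in sys.stdin:
        m = PULLED.search(line)
        if m:
            print(f"{m.group('image'):55s} {m.group('duration')}")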
Jan 29 10:59:11.620588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:11.742388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:11.743942 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:59:11.787210 kubelet[2316]: E0129 10:59:11.786582 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:59:11.789220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:59:11.789355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:59:14.287452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:14.293653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:14.316567 systemd[1]: Reloading requested from client PID 2331 ('systemctl') (unit session-5.scope)... Jan 29 10:59:14.316585 systemd[1]: Reloading... Jan 29 10:59:14.436212 zram_generator::config[2368]: No configuration found. Jan 29 10:59:14.526027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:59:14.593729 systemd[1]: Reloading finished in 276 ms. Jan 29 10:59:14.642624 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 10:59:14.642782 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 10:59:14.643269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:14.656486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:14.757425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:14.760967 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:59:14.807290 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:59:14.807290 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:59:14.807290 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:59:14.807703 kubelet[2419]: I0129 10:59:14.807382 2419 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:59:15.677224 kubelet[2419]: I0129 10:59:15.676985 2419 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:59:15.677224 kubelet[2419]: I0129 10:59:15.677016 2419 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:59:15.678204 kubelet[2419]: I0129 10:59:15.677544 2419 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:59:15.695418 kubelet[2419]: I0129 10:59:15.695368 2419 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:59:15.695743 kubelet[2419]: E0129 10:59:15.695712 2419 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.34.178.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.704941 kubelet[2419]: I0129 10:59:15.704909 2419 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 10:59:15.707202 kubelet[2419]: I0129 10:59:15.707102 2419 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:59:15.707376 kubelet[2419]: I0129 10:59:15.707146 2419 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-3-8e4516c670","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 10:59:15.707532 kubelet[2419]: I0129 10:59:15.707418 2419 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:59:15.707532 kubelet[2419]: I0129 10:59:15.707429 2419 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 10:59:15.707778 kubelet[2419]: I0129 10:59:15.707739 2419 state_mem.go:36] "Initialized new 
in-memory state store" Jan 29 10:59:15.708820 kubelet[2419]: I0129 10:59:15.708785 2419 kubelet.go:400] "Attempting to sync node with API server" Jan 29 10:59:15.708820 kubelet[2419]: I0129 10:59:15.708812 2419 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:59:15.709442 kubelet[2419]: I0129 10:59:15.709075 2419 kubelet.go:312] "Adding apiserver pod source" Jan 29 10:59:15.709442 kubelet[2419]: I0129 10:59:15.709154 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:59:15.711937 kubelet[2419]: W0129 10:59:15.711884 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-8e4516c670&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.712135 kubelet[2419]: E0129 10:59:15.712116 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-8e4516c670&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.712479 kubelet[2419]: I0129 10:59:15.712454 2419 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:59:15.713217 kubelet[2419]: I0129 10:59:15.713051 2419 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:59:15.713318 kubelet[2419]: W0129 10:59:15.713301 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
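Every reflector and certificate error above is the same symptom: nothing is listening on 188.34.178.132:6443 yet, because the kube-apiserver static pod has not started. The kubelet keeps retrying on its own; an equivalent wait loop, using the address from the log, would look roughly like:

    # Sketch: wait until the API server endpoint from the log accepts TCP connections.
    import socket
    import time

    HOST, PORT = "188.34.178.132", 6443
    while True:
        try:
            with socket.create_connection((HOST, PORT), timeout=2.0):
                print(f"{HOST}:{PORT} is accepting connections")
                break
        except OSError:
            print(f"{HOST}:{PORT} still refusing connections; retrying")
            time.sleep(2)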
Jan 29 10:59:15.714740 kubelet[2419]: I0129 10:59:15.714395 2419 server.go:1264] "Started kubelet" Jan 29 10:59:15.717913 kubelet[2419]: W0129 10:59:15.717862 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.717913 kubelet[2419]: E0129 10:59:15.717914 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.718091 kubelet[2419]: I0129 10:59:15.718048 2419 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:59:15.718337 kubelet[2419]: I0129 10:59:15.718285 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:59:15.718745 kubelet[2419]: I0129 10:59:15.718716 2419 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:59:15.719132 kubelet[2419]: E0129 10:59:15.718955 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.34.178.132:6443/api/v1/namespaces/default/events\": dial tcp 188.34.178.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-3-8e4516c670.181f24b86d488191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-3-8e4516c670,UID:ci-4186-1-0-3-8e4516c670,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-3-8e4516c670,},FirstTimestamp:2025-01-29 10:59:15.714351505 +0000 UTC m=+0.948795032,LastTimestamp:2025-01-29 10:59:15.714351505 +0000 UTC m=+0.948795032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-3-8e4516c670,}" Jan 29 10:59:15.719275 kubelet[2419]: I0129 10:59:15.719137 2419 server.go:455] "Adding debug handlers to kubelet server" Jan 29 10:59:15.723201 kubelet[2419]: I0129 10:59:15.721475 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:59:15.725634 kubelet[2419]: E0129 10:59:15.725537 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.34.178.132:6443/api/v1/namespaces/default/events\": dial tcp 188.34.178.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-3-8e4516c670.181f24b86d488191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-3-8e4516c670,UID:ci-4186-1-0-3-8e4516c670,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-3-8e4516c670,},FirstTimestamp:2025-01-29 10:59:15.714351505 +0000 UTC m=+0.948795032,LastTimestamp:2025-01-29 10:59:15.714351505 +0000 UTC m=+0.948795032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-3-8e4516c670,}" Jan 29 10:59:15.726374 kubelet[2419]: E0129 10:59:15.726354 2419 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:59:15.727009 kubelet[2419]: E0129 10:59:15.726681 2419 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-3-8e4516c670\" not found" Jan 29 10:59:15.727009 kubelet[2419]: I0129 10:59:15.726788 2419 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 10:59:15.727009 kubelet[2419]: I0129 10:59:15.726903 2419 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 10:59:15.727972 kubelet[2419]: I0129 10:59:15.727956 2419 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:59:15.728453 kubelet[2419]: E0129 10:59:15.728421 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-8e4516c670?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="200ms" Jan 29 10:59:15.728676 kubelet[2419]: W0129 10:59:15.728630 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.728767 kubelet[2419]: E0129 10:59:15.728755 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.729351 kubelet[2419]: I0129 10:59:15.729331 2419 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:59:15.729538 kubelet[2419]: I0129 10:59:15.729505 2419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:59:15.731447 kubelet[2419]: I0129 10:59:15.731427 2419 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:59:15.741386 kubelet[2419]: I0129 10:59:15.740737 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:59:15.742803 kubelet[2419]: I0129 10:59:15.742772 2419 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:59:15.742803 kubelet[2419]: I0129 10:59:15.742807 2419 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:59:15.742898 kubelet[2419]: I0129 10:59:15.742826 2419 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 10:59:15.742898 kubelet[2419]: E0129 10:59:15.742867 2419 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:59:15.749596 kubelet[2419]: W0129 10:59:15.749542 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.749596 kubelet[2419]: E0129 10:59:15.749598 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:15.765895 kubelet[2419]: I0129 10:59:15.765870 2419 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:59:15.765895 kubelet[2419]: I0129 10:59:15.765890 2419 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:59:15.765895 kubelet[2419]: I0129 10:59:15.765907 2419 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:59:15.768505 kubelet[2419]: I0129 10:59:15.768475 2419 policy_none.go:49] "None policy: Start" Jan 29 10:59:15.769162 kubelet[2419]: I0129 10:59:15.769139 2419 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:59:15.769239 kubelet[2419]: I0129 10:59:15.769202 2419 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:59:15.776728 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:59:15.792911 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:59:15.796659 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 10:59:15.808226 kubelet[2419]: I0129 10:59:15.807981 2419 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:59:15.808685 kubelet[2419]: I0129 10:59:15.808303 2419 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:59:15.808685 kubelet[2419]: I0129 10:59:15.808455 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:59:15.811972 kubelet[2419]: E0129 10:59:15.811723 2419 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-3-8e4516c670\" not found" Jan 29 10:59:15.829726 kubelet[2419]: I0129 10:59:15.829658 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.830133 kubelet[2419]: E0129 10:59:15.830095 2419 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.843673 kubelet[2419]: I0129 10:59:15.843476 2419 topology_manager.go:215] "Topology Admit Handler" podUID="7f7220b8fc2a93265b74d37db8de1420" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.846319 kubelet[2419]: I0129 10:59:15.846277 2419 topology_manager.go:215] "Topology Admit Handler" podUID="9e1318d1478cf74647bb5d2d48299d60" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.849031 kubelet[2419]: I0129 10:59:15.848996 2419 topology_manager.go:215] "Topology Admit Handler" podUID="8b9012e57c9935cc147e0187d2988b5b" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.857571 systemd[1]: Created slice kubepods-burstable-pod7f7220b8fc2a93265b74d37db8de1420.slice - libcontainer container kubepods-burstable-pod7f7220b8fc2a93265b74d37db8de1420.slice. Jan 29 10:59:15.860800 systemd[1]: Created slice kubepods-burstable-pod9e1318d1478cf74647bb5d2d48299d60.slice - libcontainer container kubepods-burstable-pod9e1318d1478cf74647bb5d2d48299d60.slice. Jan 29 10:59:15.876110 systemd[1]: Created slice kubepods-burstable-pod8b9012e57c9935cc147e0187d2988b5b.slice - libcontainer container kubepods-burstable-pod8b9012e57c9935cc147e0187d2988b5b.slice. 
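The three "Topology Admit Handler" entries above are the control-plane static pods, which the kubelet reads from the static pod path it logged earlier (/etc/kubernetes/manifests). A small sketch listing those manifests (the directory comes from the log; the *.yaml naming is the usual kubeadm convention and an assumption here):

    # Sketch: list the static pod manifests the kubelet watches.
    import pathlib

    MANIFESTS = pathlib.Path("/etc/kubernetes/manifests")
    for path in sorted(MANIFESTS.glob("*.yaml")):
        print(path.name)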
Jan 29 10:59:15.929171 kubelet[2419]: I0129 10:59:15.928750 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929171 kubelet[2419]: I0129 10:59:15.928800 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929171 kubelet[2419]: I0129 10:59:15.928822 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929171 kubelet[2419]: I0129 10:59:15.928842 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929171 kubelet[2419]: I0129 10:59:15.928879 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929436 kubelet[2419]: I0129 10:59:15.928904 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929436 kubelet[2419]: I0129 10:59:15.928927 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929436 kubelet[2419]: I0129 10:59:15.928946 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929436 kubelet[2419]: I0129 10:59:15.928966 2419 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b9012e57c9935cc147e0187d2988b5b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-3-8e4516c670\" (UID: \"8b9012e57c9935cc147e0187d2988b5b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:15.929755 kubelet[2419]: E0129 10:59:15.929710 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-8e4516c670?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="400ms" Jan 29 10:59:16.032900 kubelet[2419]: I0129 10:59:16.032413 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:16.032900 kubelet[2419]: E0129 10:59:16.032828 2419 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:16.172460 containerd[1475]: time="2025-01-29T10:59:16.172385424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-3-8e4516c670,Uid:9e1318d1478cf74647bb5d2d48299d60,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:16.173160 containerd[1475]: time="2025-01-29T10:59:16.172392625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-3-8e4516c670,Uid:7f7220b8fc2a93265b74d37db8de1420,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:16.180795 containerd[1475]: time="2025-01-29T10:59:16.180223986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-3-8e4516c670,Uid:8b9012e57c9935cc147e0187d2988b5b,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:16.331348 kubelet[2419]: E0129 10:59:16.331280 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-8e4516c670?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="800ms" Jan 29 10:59:16.435840 kubelet[2419]: I0129 10:59:16.435418 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:16.436260 kubelet[2419]: E0129 10:59:16.436219 2419 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:16.698119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824547428.mount: Deactivated successfully. 
Jan 29 10:59:16.705827 containerd[1475]: time="2025-01-29T10:59:16.705773440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:59:16.708199 containerd[1475]: time="2025-01-29T10:59:16.708063504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 10:59:16.711211 containerd[1475]: time="2025-01-29T10:59:16.709296077Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:59:16.712871 containerd[1475]: time="2025-01-29T10:59:16.712836034Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:59:16.715130 containerd[1475]: time="2025-01-29T10:59:16.715064017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:59:16.716000 containerd[1475]: time="2025-01-29T10:59:16.715968066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:59:16.716249 containerd[1475]: time="2025-01-29T10:59:16.716216829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:59:16.722306 containerd[1475]: time="2025-01-29T10:59:16.722249211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:59:16.723227 containerd[1475]: time="2025-01-29T10:59:16.723196261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.846034ms" Jan 29 10:59:16.724348 containerd[1475]: time="2025-01-29T10:59:16.724303993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.787527ms" Jan 29 10:59:16.725330 containerd[1475]: time="2025-01-29T10:59:16.725299443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.126811ms" Jan 29 10:59:16.850232 kubelet[2419]: W0129 10:59:16.848664 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-8e4516c670&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused 
Jan 29 10:59:16.850232 kubelet[2419]: E0129 10:59:16.848723 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-8e4516c670&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:16.863383 containerd[1475]: time="2025-01-29T10:59:16.863265595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:16.863565 containerd[1475]: time="2025-01-29T10:59:16.863346196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:16.863565 containerd[1475]: time="2025-01-29T10:59:16.863363236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.863565 containerd[1475]: time="2025-01-29T10:59:16.863444677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.868502 containerd[1475]: time="2025-01-29T10:59:16.867364598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:16.868502 containerd[1475]: time="2025-01-29T10:59:16.867423958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:16.868502 containerd[1475]: time="2025-01-29T10:59:16.867444318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.868502 containerd[1475]: time="2025-01-29T10:59:16.867512319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.870130 containerd[1475]: time="2025-01-29T10:59:16.869990665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:16.870130 containerd[1475]: time="2025-01-29T10:59:16.870057505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:16.870130 containerd[1475]: time="2025-01-29T10:59:16.870072106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.871066 containerd[1475]: time="2025-01-29T10:59:16.871004035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:16.888376 systemd[1]: Started cri-containerd-9d5f6fb9d6dff246dd7ef6c9375a7455405ef080a2556591ca4fe7a30cee9d8c.scope - libcontainer container 9d5f6fb9d6dff246dd7ef6c9375a7455405ef080a2556591ca4fe7a30cee9d8c. Jan 29 10:59:16.896495 systemd[1]: Started cri-containerd-035d37e5e183e218b0b3956e1d344195e35629242153c1565a7df503a5f01f6b.scope - libcontainer container 035d37e5e183e218b0b3956e1d344195e35629242153c1565a7df503a5f01f6b. Jan 29 10:59:16.898017 systemd[1]: Started cri-containerd-24cf50658591043c00ed98dedd90c558786e8bc79431c0fc49f4e26d6c0c76e8.scope - libcontainer container 24cf50658591043c00ed98dedd90c558786e8bc79431c0fc49f4e26d6c0c76e8. 
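Once the cri-containerd sandbox scopes above are running, the same state can be inspected through the CRI. A hedged sketch using crictl (the tool and the containerd socket path are assumptions; the log itself only names containerd as the runtime):

    # Sketch: show pod sandboxes and containers behind the cri-containerd-<id>.scope units.
    import subprocess

    ENDPOINT = "unix:///run/containerd/containerd.sock"  # assumed default containerd socket
    for args in (["pods"], ["ps", "-a"]):
        subprocess.run(["crictl", "--runtime-endpoint", ENDPOINT, *args], check=False)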
Jan 29 10:59:16.942921 containerd[1475]: time="2025-01-29T10:59:16.942831421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-3-8e4516c670,Uid:9e1318d1478cf74647bb5d2d48299d60,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d5f6fb9d6dff246dd7ef6c9375a7455405ef080a2556591ca4fe7a30cee9d8c\"" Jan 29 10:59:16.948928 containerd[1475]: time="2025-01-29T10:59:16.948741162Z" level=info msg="CreateContainer within sandbox \"9d5f6fb9d6dff246dd7ef6c9375a7455405ef080a2556591ca4fe7a30cee9d8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 10:59:16.959123 containerd[1475]: time="2025-01-29T10:59:16.958941708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-3-8e4516c670,Uid:8b9012e57c9935cc147e0187d2988b5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"035d37e5e183e218b0b3956e1d344195e35629242153c1565a7df503a5f01f6b\"" Jan 29 10:59:16.960487 containerd[1475]: time="2025-01-29T10:59:16.960369843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-3-8e4516c670,Uid:7f7220b8fc2a93265b74d37db8de1420,Namespace:kube-system,Attempt:0,} returns sandbox id \"24cf50658591043c00ed98dedd90c558786e8bc79431c0fc49f4e26d6c0c76e8\"" Jan 29 10:59:16.962379 containerd[1475]: time="2025-01-29T10:59:16.962214862Z" level=info msg="CreateContainer within sandbox \"035d37e5e183e218b0b3956e1d344195e35629242153c1565a7df503a5f01f6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 10:59:16.964380 containerd[1475]: time="2025-01-29T10:59:16.964352084Z" level=info msg="CreateContainer within sandbox \"24cf50658591043c00ed98dedd90c558786e8bc79431c0fc49f4e26d6c0c76e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 10:59:16.973786 containerd[1475]: time="2025-01-29T10:59:16.973740862Z" level=info msg="CreateContainer within sandbox \"9d5f6fb9d6dff246dd7ef6c9375a7455405ef080a2556591ca4fe7a30cee9d8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e6395a83bdd0ae6f7b2fbc2f73dc00a9f2e7b0e02da310415db52b05ab6cf78e\"" Jan 29 10:59:16.974683 containerd[1475]: time="2025-01-29T10:59:16.974609911Z" level=info msg="StartContainer for \"e6395a83bdd0ae6f7b2fbc2f73dc00a9f2e7b0e02da310415db52b05ab6cf78e\"" Jan 29 10:59:16.988396 containerd[1475]: time="2025-01-29T10:59:16.988270852Z" level=info msg="CreateContainer within sandbox \"24cf50658591043c00ed98dedd90c558786e8bc79431c0fc49f4e26d6c0c76e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54b3ae4c7b76fcbf6ba8ece7d2e8089dece61b51db8050b838ff42527cb074ea\"" Jan 29 10:59:16.989380 containerd[1475]: time="2025-01-29T10:59:16.989347064Z" level=info msg="StartContainer for \"54b3ae4c7b76fcbf6ba8ece7d2e8089dece61b51db8050b838ff42527cb074ea\"" Jan 29 10:59:16.989891 containerd[1475]: time="2025-01-29T10:59:16.989750468Z" level=info msg="CreateContainer within sandbox \"035d37e5e183e218b0b3956e1d344195e35629242153c1565a7df503a5f01f6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1181a63ee4f9b43cd96c84e255b7af97911d8f4ffe7ef8902ae2c4fc97148c76\"" Jan 29 10:59:16.990513 containerd[1475]: time="2025-01-29T10:59:16.990355834Z" level=info msg="StartContainer for \"1181a63ee4f9b43cd96c84e255b7af97911d8f4ffe7ef8902ae2c4fc97148c76\"" Jan 29 10:59:17.004379 systemd[1]: Started cri-containerd-e6395a83bdd0ae6f7b2fbc2f73dc00a9f2e7b0e02da310415db52b05ab6cf78e.scope - libcontainer container 
e6395a83bdd0ae6f7b2fbc2f73dc00a9f2e7b0e02da310415db52b05ab6cf78e. Jan 29 10:59:17.028392 systemd[1]: Started cri-containerd-1181a63ee4f9b43cd96c84e255b7af97911d8f4ffe7ef8902ae2c4fc97148c76.scope - libcontainer container 1181a63ee4f9b43cd96c84e255b7af97911d8f4ffe7ef8902ae2c4fc97148c76. Jan 29 10:59:17.037330 systemd[1]: Started cri-containerd-54b3ae4c7b76fcbf6ba8ece7d2e8089dece61b51db8050b838ff42527cb074ea.scope - libcontainer container 54b3ae4c7b76fcbf6ba8ece7d2e8089dece61b51db8050b838ff42527cb074ea. Jan 29 10:59:17.087380 containerd[1475]: time="2025-01-29T10:59:17.087167277Z" level=info msg="StartContainer for \"e6395a83bdd0ae6f7b2fbc2f73dc00a9f2e7b0e02da310415db52b05ab6cf78e\" returns successfully" Jan 29 10:59:17.093480 containerd[1475]: time="2025-01-29T10:59:17.093381902Z" level=info msg="StartContainer for \"1181a63ee4f9b43cd96c84e255b7af97911d8f4ffe7ef8902ae2c4fc97148c76\" returns successfully" Jan 29 10:59:17.101733 containerd[1475]: time="2025-01-29T10:59:17.101544346Z" level=info msg="StartContainer for \"54b3ae4c7b76fcbf6ba8ece7d2e8089dece61b51db8050b838ff42527cb074ea\" returns successfully" Jan 29 10:59:17.131886 kubelet[2419]: E0129 10:59:17.131842 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-8e4516c670?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="1.6s" Jan 29 10:59:17.148672 kubelet[2419]: W0129 10:59:17.148605 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.148672 kubelet[2419]: E0129 10:59:17.148651 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.178232 kubelet[2419]: W0129 10:59:17.176997 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.178232 kubelet[2419]: E0129 10:59:17.177075 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.203031 kubelet[2419]: W0129 10:59:17.202873 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.203031 kubelet[2419]: E0129 10:59:17.202936 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 10:59:17.238583 kubelet[2419]: I0129 10:59:17.238347 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-8e4516c670" Jan 29 
10:59:19.591360 kubelet[2419]: E0129 10:59:19.591032 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-3-8e4516c670\" not found" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:19.627004 kubelet[2419]: I0129 10:59:19.626965 2419 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:19.719166 kubelet[2419]: I0129 10:59:19.718915 2419 apiserver.go:52] "Watching apiserver" Jan 29 10:59:19.727476 kubelet[2419]: I0129 10:59:19.727439 2419 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:59:19.786939 kubelet[2419]: E0129 10:59:19.786833 2419 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:19.803361 kubelet[2419]: E0129 10:59:19.803212 2419 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-0-3-8e4516c670\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:21.709523 systemd[1]: Reloading requested from client PID 2690 ('systemctl') (unit session-5.scope)... Jan 29 10:59:21.709578 systemd[1]: Reloading... Jan 29 10:59:21.825265 zram_generator::config[2733]: No configuration found. Jan 29 10:59:21.929065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:59:22.007506 systemd[1]: Reloading finished in 297 ms. Jan 29 10:59:22.048730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:22.049458 kubelet[2419]: I0129 10:59:22.049360 2419 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:59:22.065316 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:59:22.065913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:22.067259 systemd[1]: kubelet.service: Consumed 1.327s CPU time, 113.5M memory peak, 0B memory swap peak. Jan 29 10:59:22.072684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:59:22.174973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:59:22.179415 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:59:22.232195 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:59:22.232195 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:59:22.232195 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
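The "no PriorityClass with name system-node-critical" mirror-pod errors above are transient: the built-in priority classes are created by the API server itself shortly after it becomes reachable, after which the mirror pods are accepted. One way to confirm, sketched with kubectl (the admin.conf path is the usual kubeadm location and an assumption; the log does not show it):

    # Sketch: check that the built-in priority class exists once the API server is up.
    import subprocess

    subprocess.run(["kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf",
                    "get", "priorityclass", "system-node-critical"], check=False)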
Jan 29 10:59:22.232533 kubelet[2775]: I0129 10:59:22.232314 2775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:59:22.237119 kubelet[2775]: I0129 10:59:22.237086 2775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:59:22.237119 kubelet[2775]: I0129 10:59:22.237111 2775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:59:22.237328 kubelet[2775]: I0129 10:59:22.237310 2775 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:59:22.238926 kubelet[2775]: I0129 10:59:22.238883 2775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 10:59:22.240532 kubelet[2775]: I0129 10:59:22.240498 2775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:59:22.251299 kubelet[2775]: I0129 10:59:22.250466 2775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 10:59:22.251299 kubelet[2775]: I0129 10:59:22.250713 2775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:59:22.252289 kubelet[2775]: I0129 10:59:22.250735 2775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-3-8e4516c670","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 10:59:22.252289 kubelet[2775]: I0129 10:59:22.251870 2775 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:59:22.252289 kubelet[2775]: I0129 10:59:22.251883 2775 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 10:59:22.252289 kubelet[2775]: I0129 10:59:22.251927 2775 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:59:22.252289 kubelet[2775]: I0129 10:59:22.252028 2775 kubelet.go:400] "Attempting to sync node with API server" Jan 29 10:59:22.252584 kubelet[2775]: I0129 10:59:22.252038 2775 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 29 10:59:22.252584 kubelet[2775]: I0129 10:59:22.252064 2775 kubelet.go:312] "Adding apiserver pod source" Jan 29 10:59:22.252584 kubelet[2775]: I0129 10:59:22.252082 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:59:22.256187 kubelet[2775]: I0129 10:59:22.255277 2775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:59:22.256187 kubelet[2775]: I0129 10:59:22.255498 2775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:59:22.256187 kubelet[2775]: I0129 10:59:22.256019 2775 server.go:1264] "Started kubelet" Jan 29 10:59:22.259252 kubelet[2775]: I0129 10:59:22.258171 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:59:22.264196 kubelet[2775]: I0129 10:59:22.263435 2775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:59:22.264304 kubelet[2775]: I0129 10:59:22.264279 2775 server.go:455] "Adding debug handlers to kubelet server" Jan 29 10:59:22.266185 kubelet[2775]: I0129 10:59:22.265092 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:59:22.266185 kubelet[2775]: I0129 10:59:22.265336 2775 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:59:22.268210 kubelet[2775]: I0129 10:59:22.266947 2775 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 10:59:22.268404 kubelet[2775]: I0129 10:59:22.268370 2775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 10:59:22.268670 kubelet[2775]: I0129 10:59:22.268506 2775 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:59:22.272190 kubelet[2775]: I0129 10:59:22.271688 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:59:22.273223 kubelet[2775]: I0129 10:59:22.272599 2775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:59:22.273223 kubelet[2775]: I0129 10:59:22.272634 2775 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:59:22.273223 kubelet[2775]: I0129 10:59:22.272652 2775 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 10:59:22.273223 kubelet[2775]: E0129 10:59:22.272690 2775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:59:22.290999 kubelet[2775]: I0129 10:59:22.290803 2775 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:59:22.290999 kubelet[2775]: I0129 10:59:22.290910 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:59:22.300212 kubelet[2775]: I0129 10:59:22.299974 2775 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:59:22.360460 kubelet[2775]: I0129 10:59:22.360430 2775 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:59:22.361065 kubelet[2775]: I0129 10:59:22.360654 2775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:59:22.361065 kubelet[2775]: I0129 10:59:22.360690 2775 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:59:22.361065 kubelet[2775]: I0129 10:59:22.360926 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 10:59:22.361065 kubelet[2775]: I0129 10:59:22.360944 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 10:59:22.361065 kubelet[2775]: I0129 10:59:22.360971 2775 policy_none.go:49] "None policy: Start" Jan 29 10:59:22.362046 kubelet[2775]: I0129 10:59:22.362007 2775 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:59:22.362046 kubelet[2775]: I0129 10:59:22.362046 2775 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:59:22.362502 kubelet[2775]: I0129 10:59:22.362460 2775 state_mem.go:75] "Updated machine memory state" Jan 29 10:59:22.367250 kubelet[2775]: I0129 10:59:22.367228 2775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:59:22.367936 kubelet[2775]: I0129 10:59:22.367512 2775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:59:22.367936 kubelet[2775]: I0129 10:59:22.367649 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:59:22.374021 kubelet[2775]: I0129 10:59:22.373911 2775 topology_manager.go:215] "Topology Admit Handler" podUID="7f7220b8fc2a93265b74d37db8de1420" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.374132 kubelet[2775]: I0129 10:59:22.374100 2775 topology_manager.go:215] "Topology Admit Handler" podUID="9e1318d1478cf74647bb5d2d48299d60" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.374202 kubelet[2775]: I0129 10:59:22.374145 2775 topology_manager.go:215] "Topology Admit Handler" podUID="8b9012e57c9935cc147e0187d2988b5b" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.378044 kubelet[2775]: I0129 10:59:22.378002 2775 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.386224 kubelet[2775]: E0129 10:59:22.385429 2775 kubelet.go:1928] "Failed creating a 
mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" already exists" pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.386825 kubelet[2775]: I0129 10:59:22.386791 2775 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.386888 kubelet[2775]: I0129 10:59:22.386867 2775 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470685 kubelet[2775]: I0129 10:59:22.470347 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470685 kubelet[2775]: I0129 10:59:22.470399 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470685 kubelet[2775]: I0129 10:59:22.470427 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470685 kubelet[2775]: I0129 10:59:22.470449 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b9012e57c9935cc147e0187d2988b5b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-3-8e4516c670\" (UID: \"8b9012e57c9935cc147e0187d2988b5b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470685 kubelet[2775]: I0129 10:59:22.470477 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470945 kubelet[2775]: I0129 10:59:22.470498 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f7220b8fc2a93265b74d37db8de1420-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" (UID: \"7f7220b8fc2a93265b74d37db8de1420\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470945 kubelet[2775]: I0129 10:59:22.470518 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470945 kubelet[2775]: I0129 10:59:22.470538 2775 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:22.470945 kubelet[2775]: I0129 10:59:22.470570 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e1318d1478cf74647bb5d2d48299d60-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-3-8e4516c670\" (UID: \"9e1318d1478cf74647bb5d2d48299d60\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:23.254058 kubelet[2775]: I0129 10:59:23.253797 2775 apiserver.go:52] "Watching apiserver" Jan 29 10:59:23.268910 kubelet[2775]: I0129 10:59:23.268871 2775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:59:23.356827 kubelet[2775]: E0129 10:59:23.356277 2775 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-3-8e4516c670\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" Jan 29 10:59:23.385926 kubelet[2775]: I0129 10:59:23.385851 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-3-8e4516c670" podStartSLOduration=1.3858103800000001 podStartE2EDuration="1.38581038s" podCreationTimestamp="2025-01-29 10:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:23.372513964 +0000 UTC m=+1.189948658" watchObservedRunningTime="2025-01-29 10:59:23.38581038 +0000 UTC m=+1.203245074" Jan 29 10:59:23.399921 kubelet[2775]: I0129 10:59:23.399643 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-3-8e4516c670" podStartSLOduration=1.399618282 podStartE2EDuration="1.399618282s" podCreationTimestamp="2025-01-29 10:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:23.386463467 +0000 UTC m=+1.203898161" watchObservedRunningTime="2025-01-29 10:59:23.399618282 +0000 UTC m=+1.217052976" Jan 29 10:59:23.719714 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 29 10:59:23.880208 sshd[1828]: Connection closed by 147.75.109.163 port 58406 Jan 29 10:59:23.881288 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Jan 29 10:59:23.886309 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:59:23.887266 systemd[1]: sshd@4-188.34.178.132:22-147.75.109.163:58406.service: Deactivated successfully. Jan 29 10:59:23.889692 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:59:23.889906 systemd[1]: session-5.scope: Consumed 6.817s CPU time, 191.8M memory peak, 0B memory swap peak. Jan 29 10:59:23.891007 systemd-logind[1460]: Removed session 5. 
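The VerifyControllerAttachedVolume entries above correspond to hostPath volumes declared in the static pod manifests under /etc/kubernetes/manifests. A sketch of the kind of stanza that yields the ca-certs and k8s-certs volumes for kube-apiserver, with host paths taken from kubeadm defaults rather than read off this node:

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- hypothetical excerpt
    volumes:
    - name: ca-certs
      hostPath:
        path: /etc/ssl/certs          # assumed kubeadm default
        type: DirectoryOrCreate
    - name: k8s-certs
      hostPath:
        path: /etc/kubernetes/pki     # assumed kubeadm default
        type: DirectoryOrCreate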
Jan 29 10:59:29.350062 kubelet[2775]: I0129 10:59:29.349781 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-3-8e4516c670" podStartSLOduration=8.349755576 podStartE2EDuration="8.349755576s" podCreationTimestamp="2025-01-29 10:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:23.399795084 +0000 UTC m=+1.217229778" watchObservedRunningTime="2025-01-29 10:59:29.349755576 +0000 UTC m=+7.167190310" Jan 29 10:59:35.945165 kubelet[2775]: I0129 10:59:35.944922 2775 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 10:59:35.946238 containerd[1475]: time="2025-01-29T10:59:35.946056094Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 10:59:35.946676 kubelet[2775]: I0129 10:59:35.946473 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 10:59:36.303104 kubelet[2775]: I0129 10:59:36.302342 2775 topology_manager.go:215] "Topology Admit Handler" podUID="a4ae75bf-9813-467f-b345-d6aea155b02b" podNamespace="kube-system" podName="kube-proxy-dssgz" Jan 29 10:59:36.313390 systemd[1]: Created slice kubepods-besteffort-poda4ae75bf_9813_467f_b345_d6aea155b02b.slice - libcontainer container kubepods-besteffort-poda4ae75bf_9813_467f_b345_d6aea155b02b.slice. Jan 29 10:59:36.321425 kubelet[2775]: I0129 10:59:36.321391 2775 topology_manager.go:215] "Topology Admit Handler" podUID="51faa92d-6e33-491a-92bb-46fbf5d7a21f" podNamespace="kube-flannel" podName="kube-flannel-ds-vkzrb" Jan 29 10:59:36.338770 systemd[1]: Created slice kubepods-burstable-pod51faa92d_6e33_491a_92bb_46fbf5d7a21f.slice - libcontainer container kubepods-burstable-pod51faa92d_6e33_491a_92bb_46fbf5d7a21f.slice. 
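The containerd message "No cni config template is specified, wait for other system components to drop the config." above comes from the CRI plugin's CNI section: with no conf_template set, containerd simply watches conf_dir until flannel's install step writes a network config there. A sketch of the relevant containerd config, with default paths assumed rather than taken from this host:

    # /etc/containerd/config.toml -- hypothetical excerpt, default paths assumed
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""   # empty: wait for the CNI provider (here flannel) to drop a config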
Jan 29 10:59:36.340252 kubelet[2775]: W0129 10:59:36.339843 2775 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-3-8e4516c670" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4186-1-0-3-8e4516c670' and this object Jan 29 10:59:36.340252 kubelet[2775]: E0129 10:59:36.339892 2775 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-3-8e4516c670" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4186-1-0-3-8e4516c670' and this object Jan 29 10:59:36.340252 kubelet[2775]: W0129 10:59:36.339936 2775 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4186-1-0-3-8e4516c670" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4186-1-0-3-8e4516c670' and this object Jan 29 10:59:36.340252 kubelet[2775]: E0129 10:59:36.339947 2775 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4186-1-0-3-8e4516c670" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4186-1-0-3-8e4516c670' and this object Jan 29 10:59:36.357081 kubelet[2775]: I0129 10:59:36.357047 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdt6f\" (UniqueName: \"kubernetes.io/projected/51faa92d-6e33-491a-92bb-46fbf5d7a21f-kube-api-access-vdt6f\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.357273 kubelet[2775]: I0129 10:59:36.357257 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4ae75bf-9813-467f-b345-d6aea155b02b-xtables-lock\") pod \"kube-proxy-dssgz\" (UID: \"a4ae75bf-9813-467f-b345-d6aea155b02b\") " pod="kube-system/kube-proxy-dssgz" Jan 29 10:59:36.357365 kubelet[2775]: I0129 10:59:36.357352 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9ltq\" (UniqueName: \"kubernetes.io/projected/a4ae75bf-9813-467f-b345-d6aea155b02b-kube-api-access-z9ltq\") pod \"kube-proxy-dssgz\" (UID: \"a4ae75bf-9813-467f-b345-d6aea155b02b\") " pod="kube-system/kube-proxy-dssgz" Jan 29 10:59:36.357440 kubelet[2775]: I0129 10:59:36.357428 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4ae75bf-9813-467f-b345-d6aea155b02b-kube-proxy\") pod \"kube-proxy-dssgz\" (UID: \"a4ae75bf-9813-467f-b345-d6aea155b02b\") " pod="kube-system/kube-proxy-dssgz" Jan 29 10:59:36.357531 kubelet[2775]: I0129 10:59:36.357520 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4ae75bf-9813-467f-b345-d6aea155b02b-lib-modules\") pod \"kube-proxy-dssgz\" (UID: 
\"a4ae75bf-9813-467f-b345-d6aea155b02b\") " pod="kube-system/kube-proxy-dssgz" Jan 29 10:59:36.357650 kubelet[2775]: I0129 10:59:36.357595 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/51faa92d-6e33-491a-92bb-46fbf5d7a21f-run\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.357739 kubelet[2775]: I0129 10:59:36.357718 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/51faa92d-6e33-491a-92bb-46fbf5d7a21f-flannel-cfg\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.357886 kubelet[2775]: I0129 10:59:36.357808 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51faa92d-6e33-491a-92bb-46fbf5d7a21f-xtables-lock\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.357886 kubelet[2775]: I0129 10:59:36.357831 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/51faa92d-6e33-491a-92bb-46fbf5d7a21f-cni-plugin\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.357886 kubelet[2775]: I0129 10:59:36.357845 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/51faa92d-6e33-491a-92bb-46fbf5d7a21f-cni\") pod \"kube-flannel-ds-vkzrb\" (UID: \"51faa92d-6e33-491a-92bb-46fbf5d7a21f\") " pod="kube-flannel/kube-flannel-ds-vkzrb" Jan 29 10:59:36.472221 kubelet[2775]: E0129 10:59:36.471866 2775 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 10:59:36.472221 kubelet[2775]: E0129 10:59:36.471921 2775 projected.go:200] Error preparing data for projected volume kube-api-access-z9ltq for pod kube-system/kube-proxy-dssgz: configmap "kube-root-ca.crt" not found Jan 29 10:59:36.472221 kubelet[2775]: E0129 10:59:36.471994 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4ae75bf-9813-467f-b345-d6aea155b02b-kube-api-access-z9ltq podName:a4ae75bf-9813-467f-b345-d6aea155b02b nodeName:}" failed. No retries permitted until 2025-01-29 10:59:36.97196496 +0000 UTC m=+14.789399654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z9ltq" (UniqueName: "kubernetes.io/projected/a4ae75bf-9813-467f-b345-d6aea155b02b-kube-api-access-z9ltq") pod "kube-proxy-dssgz" (UID: "a4ae75bf-9813-467f-b345-d6aea155b02b") : configmap "kube-root-ca.crt" not found Jan 29 10:59:37.227555 containerd[1475]: time="2025-01-29T10:59:37.227460260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dssgz,Uid:a4ae75bf-9813-467f-b345-d6aea155b02b,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:37.249373 containerd[1475]: time="2025-01-29T10:59:37.248980597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:37.249373 containerd[1475]: time="2025-01-29T10:59:37.249089038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:37.249373 containerd[1475]: time="2025-01-29T10:59:37.249152799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:37.249749 containerd[1475]: time="2025-01-29T10:59:37.249701085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:37.273484 systemd[1]: Started cri-containerd-94b512c266b859f592f01ac899194a1cc80efb99ec89a9e141810b9ec71925f8.scope - libcontainer container 94b512c266b859f592f01ac899194a1cc80efb99ec89a9e141810b9ec71925f8. Jan 29 10:59:37.306595 containerd[1475]: time="2025-01-29T10:59:37.306535897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dssgz,Uid:a4ae75bf-9813-467f-b345-d6aea155b02b,Namespace:kube-system,Attempt:0,} returns sandbox id \"94b512c266b859f592f01ac899194a1cc80efb99ec89a9e141810b9ec71925f8\"" Jan 29 10:59:37.311563 containerd[1475]: time="2025-01-29T10:59:37.311535588Z" level=info msg="CreateContainer within sandbox \"94b512c266b859f592f01ac899194a1cc80efb99ec89a9e141810b9ec71925f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 10:59:37.328742 containerd[1475]: time="2025-01-29T10:59:37.328696041Z" level=info msg="CreateContainer within sandbox \"94b512c266b859f592f01ac899194a1cc80efb99ec89a9e141810b9ec71925f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffea3979d20d7fcc38b64987f4eee03c9af008f931892d400fbd6a8da19fb0fb\"" Jan 29 10:59:37.329885 containerd[1475]: time="2025-01-29T10:59:37.329685691Z" level=info msg="StartContainer for \"ffea3979d20d7fcc38b64987f4eee03c9af008f931892d400fbd6a8da19fb0fb\"" Jan 29 10:59:37.355358 systemd[1]: Started cri-containerd-ffea3979d20d7fcc38b64987f4eee03c9af008f931892d400fbd6a8da19fb0fb.scope - libcontainer container ffea3979d20d7fcc38b64987f4eee03c9af008f931892d400fbd6a8da19fb0fb. Jan 29 10:59:37.391024 containerd[1475]: time="2025-01-29T10:59:37.390845827Z" level=info msg="StartContainer for \"ffea3979d20d7fcc38b64987f4eee03c9af008f931892d400fbd6a8da19fb0fb\" returns successfully" Jan 29 10:59:37.459818 kubelet[2775]: E0129 10:59:37.459354 2775 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Jan 29 10:59:37.459818 kubelet[2775]: E0129 10:59:37.459476 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/51faa92d-6e33-491a-92bb-46fbf5d7a21f-flannel-cfg podName:51faa92d-6e33-491a-92bb-46fbf5d7a21f nodeName:}" failed. No retries permitted until 2025-01-29 10:59:37.959450879 +0000 UTC m=+15.776885573 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/51faa92d-6e33-491a-92bb-46fbf5d7a21f-flannel-cfg") pod "kube-flannel-ds-vkzrb" (UID: "51faa92d-6e33-491a-92bb-46fbf5d7a21f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 10:59:37.471747 kubelet[2775]: E0129 10:59:37.471640 2775 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 10:59:37.471952 kubelet[2775]: E0129 10:59:37.471827 2775 projected.go:200] Error preparing data for projected volume kube-api-access-vdt6f for pod kube-flannel/kube-flannel-ds-vkzrb: failed to sync configmap cache: timed out waiting for the condition Jan 29 10:59:37.472194 kubelet[2775]: E0129 10:59:37.472046 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51faa92d-6e33-491a-92bb-46fbf5d7a21f-kube-api-access-vdt6f podName:51faa92d-6e33-491a-92bb-46fbf5d7a21f nodeName:}" failed. No retries permitted until 2025-01-29 10:59:37.972023685 +0000 UTC m=+15.789458379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vdt6f" (UniqueName: "kubernetes.io/projected/51faa92d-6e33-491a-92bb-46fbf5d7a21f-kube-api-access-vdt6f") pod "kube-flannel-ds-vkzrb" (UID: "51faa92d-6e33-491a-92bb-46fbf5d7a21f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 10:59:38.070665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760669495.mount: Deactivated successfully. Jan 29 10:59:38.143992 containerd[1475]: time="2025-01-29T10:59:38.143919936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vkzrb,Uid:51faa92d-6e33-491a-92bb-46fbf5d7a21f,Namespace:kube-flannel,Attempt:0,}" Jan 29 10:59:38.167883 containerd[1475]: time="2025-01-29T10:59:38.167782496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:38.167883 containerd[1475]: time="2025-01-29T10:59:38.167836777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:38.167883 containerd[1475]: time="2025-01-29T10:59:38.167847337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:38.168188 containerd[1475]: time="2025-01-29T10:59:38.167920898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:38.187510 systemd[1]: Started cri-containerd-0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea.scope - libcontainer container 0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea. 
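The retried flannel-cfg mount above is waiting on the kube-flannel-cfg ConfigMap, which carries flannel's net-conf.json. A plausible reconstruction follows; the Network value is inferred from the 192.168.0.0/17 route handed to the bridge plugin later in this log, and the vxlan backend from the flannel.1 interface that comes up afterwards, so treat both as inferences rather than values read from the object:

    # ConfigMap kube-flannel/kube-flannel-cfg -- hypothetical reconstruction
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
    data:
      net-conf.json: |
        {
          "Network": "192.168.0.0/17",
          "Backend": { "Type": "vxlan" }
        }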
Jan 29 10:59:38.228225 containerd[1475]: time="2025-01-29T10:59:38.228084224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vkzrb,Uid:51faa92d-6e33-491a-92bb-46fbf5d7a21f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\"" Jan 29 10:59:38.230132 containerd[1475]: time="2025-01-29T10:59:38.230106084Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 10:59:38.393940 kubelet[2775]: I0129 10:59:38.393846 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dssgz" podStartSLOduration=2.3938242929999998 podStartE2EDuration="2.393824293s" podCreationTimestamp="2025-01-29 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:38.393137566 +0000 UTC m=+16.210572300" watchObservedRunningTime="2025-01-29 10:59:38.393824293 +0000 UTC m=+16.211258987" Jan 29 10:59:40.735569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998973349.mount: Deactivated successfully. Jan 29 10:59:40.776191 containerd[1475]: time="2025-01-29T10:59:40.776098256Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:40.777590 containerd[1475]: time="2025-01-29T10:59:40.777543831Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jan 29 10:59:40.778273 containerd[1475]: time="2025-01-29T10:59:40.777903154Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:40.781296 containerd[1475]: time="2025-01-29T10:59:40.780978385Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:40.782967 containerd[1475]: time="2025-01-29T10:59:40.782936325Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.552797641s" Jan 29 10:59:40.783056 containerd[1475]: time="2025-01-29T10:59:40.783040966Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 29 10:59:40.786260 containerd[1475]: time="2025-01-29T10:59:40.786142117Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 10:59:40.806858 containerd[1475]: time="2025-01-29T10:59:40.806727684Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102\"" Jan 29 10:59:40.808228 containerd[1475]: time="2025-01-29T10:59:40.807389051Z" level=info 
msg="StartContainer for \"3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102\"" Jan 29 10:59:40.838339 systemd[1]: Started cri-containerd-3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102.scope - libcontainer container 3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102. Jan 29 10:59:40.875856 containerd[1475]: time="2025-01-29T10:59:40.875114451Z" level=info msg="StartContainer for \"3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102\" returns successfully" Jan 29 10:59:40.876656 systemd[1]: cri-containerd-3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102.scope: Deactivated successfully. Jan 29 10:59:40.916737 containerd[1475]: time="2025-01-29T10:59:40.916644749Z" level=info msg="shim disconnected" id=3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102 namespace=k8s.io Jan 29 10:59:40.916981 containerd[1475]: time="2025-01-29T10:59:40.916815310Z" level=warning msg="cleaning up after shim disconnected" id=3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102 namespace=k8s.io Jan 29 10:59:40.916981 containerd[1475]: time="2025-01-29T10:59:40.916877871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:59:41.396090 containerd[1475]: time="2025-01-29T10:59:41.395940162Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 10:59:41.659026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3301a5301e27cd517fc61c6dd2e7f5e07f6dfdedf8ff39cb5b2dd755fc80d102-rootfs.mount: Deactivated successfully. Jan 29 10:59:44.019571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097520738.mount: Deactivated successfully. Jan 29 10:59:44.755353 containerd[1475]: time="2025-01-29T10:59:44.755281690Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:44.756979 containerd[1475]: time="2025-01-29T10:59:44.756613503Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 29 10:59:44.758525 containerd[1475]: time="2025-01-29T10:59:44.757967117Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:44.763945 containerd[1475]: time="2025-01-29T10:59:44.763892976Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:59:44.767332 containerd[1475]: time="2025-01-29T10:59:44.767293850Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.371307767s" Jan 29 10:59:44.767463 containerd[1475]: time="2025-01-29T10:59:44.767443972Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 29 10:59:44.770491 containerd[1475]: time="2025-01-29T10:59:44.770461482Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 10:59:44.787824 containerd[1475]: time="2025-01-29T10:59:44.787774775Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213\"" Jan 29 10:59:44.788922 containerd[1475]: time="2025-01-29T10:59:44.788629144Z" level=info msg="StartContainer for \"a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213\"" Jan 29 10:59:44.826486 systemd[1]: Started cri-containerd-a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213.scope - libcontainer container a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213. Jan 29 10:59:44.856772 containerd[1475]: time="2025-01-29T10:59:44.856718666Z" level=info msg="StartContainer for \"a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213\" returns successfully" Jan 29 10:59:44.858134 systemd[1]: cri-containerd-a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213.scope: Deactivated successfully. Jan 29 10:59:44.896207 kubelet[2775]: I0129 10:59:44.896023 2775 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 10:59:44.929856 kubelet[2775]: I0129 10:59:44.928714 2775 topology_manager.go:215] "Topology Admit Handler" podUID="5db86eb9-a268-40b5-904d-736060d52a85" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pls68" Jan 29 10:59:44.932950 kubelet[2775]: I0129 10:59:44.932301 2775 topology_manager.go:215] "Topology Admit Handler" podUID="6e8d4f5b-953b-4848-8527-eebfa4e65a0e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5gdzx" Jan 29 10:59:44.937969 containerd[1475]: time="2025-01-29T10:59:44.937631836Z" level=info msg="shim disconnected" id=a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213 namespace=k8s.io Jan 29 10:59:44.937969 containerd[1475]: time="2025-01-29T10:59:44.937827718Z" level=warning msg="cleaning up after shim disconnected" id=a92c6c7f78971019b52901afa888f9df645dcec699510e44097b3482d437d213 namespace=k8s.io Jan 29 10:59:44.937969 containerd[1475]: time="2025-01-29T10:59:44.937838278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:59:44.951903 systemd[1]: Created slice kubepods-burstable-pod5db86eb9_a268_40b5_904d_736060d52a85.slice - libcontainer container kubepods-burstable-pod5db86eb9_a268_40b5_904d_736060d52a85.slice. Jan 29 10:59:44.962041 systemd[1]: Created slice kubepods-burstable-pod6e8d4f5b_953b_4848_8527_eebfa4e65a0e.slice - libcontainer container kubepods-burstable-pod6e8d4f5b_953b_4848_8527_eebfa4e65a0e.slice. 
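The install-cni container that just ran and exited typically copies a CNI conflist into /etc/cni/net.d (commonly named 10-flannel.conflist) so containerd can find it. A sketch of its usual shape, consistent with the cbr0 bridge delegate and the hairpinMode/isDefaultGateway settings that appear below; the file name and the chained portmap plugin come from the stock flannel manifest, not from this log:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }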
Jan 29 10:59:45.017145 kubelet[2775]: I0129 10:59:45.016926 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5db86eb9-a268-40b5-904d-736060d52a85-config-volume\") pod \"coredns-7db6d8ff4d-pls68\" (UID: \"5db86eb9-a268-40b5-904d-736060d52a85\") " pod="kube-system/coredns-7db6d8ff4d-pls68" Jan 29 10:59:45.017145 kubelet[2775]: I0129 10:59:45.016993 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td9d\" (UniqueName: \"kubernetes.io/projected/6e8d4f5b-953b-4848-8527-eebfa4e65a0e-kube-api-access-4td9d\") pod \"coredns-7db6d8ff4d-5gdzx\" (UID: \"6e8d4f5b-953b-4848-8527-eebfa4e65a0e\") " pod="kube-system/coredns-7db6d8ff4d-5gdzx" Jan 29 10:59:45.017145 kubelet[2775]: I0129 10:59:45.017032 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e8d4f5b-953b-4848-8527-eebfa4e65a0e-config-volume\") pod \"coredns-7db6d8ff4d-5gdzx\" (UID: \"6e8d4f5b-953b-4848-8527-eebfa4e65a0e\") " pod="kube-system/coredns-7db6d8ff4d-5gdzx" Jan 29 10:59:45.017145 kubelet[2775]: I0129 10:59:45.017066 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfcb\" (UniqueName: \"kubernetes.io/projected/5db86eb9-a268-40b5-904d-736060d52a85-kube-api-access-cvfcb\") pod \"coredns-7db6d8ff4d-pls68\" (UID: \"5db86eb9-a268-40b5-904d-736060d52a85\") " pod="kube-system/coredns-7db6d8ff4d-pls68" Jan 29 10:59:45.257302 containerd[1475]: time="2025-01-29T10:59:45.257251236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pls68,Uid:5db86eb9-a268-40b5-904d-736060d52a85,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:45.269840 containerd[1475]: time="2025-01-29T10:59:45.269384917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gdzx,Uid:6e8d4f5b-953b-4848-8527-eebfa4e65a0e,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:45.307487 containerd[1475]: time="2025-01-29T10:59:45.307429458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gdzx,Uid:6e8d4f5b-953b-4848-8527-eebfa4e65a0e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"845d5785c18188f239ee0e8bf712a521add8b76377c41ed04b253b935e1d21cf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:59:45.307754 kubelet[2775]: E0129 10:59:45.307711 2775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845d5785c18188f239ee0e8bf712a521add8b76377c41ed04b253b935e1d21cf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:59:45.307821 kubelet[2775]: E0129 10:59:45.307779 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845d5785c18188f239ee0e8bf712a521add8b76377c41ed04b253b935e1d21cf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5gdzx" Jan 29 10:59:45.307821 kubelet[2775]: E0129 10:59:45.307797 2775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845d5785c18188f239ee0e8bf712a521add8b76377c41ed04b253b935e1d21cf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5gdzx" Jan 29 10:59:45.307937 kubelet[2775]: E0129 10:59:45.307835 2775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5gdzx_kube-system(6e8d4f5b-953b-4848-8527-eebfa4e65a0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5gdzx_kube-system(6e8d4f5b-953b-4848-8527-eebfa4e65a0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"845d5785c18188f239ee0e8bf712a521add8b76377c41ed04b253b935e1d21cf\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-5gdzx" podUID="6e8d4f5b-953b-4848-8527-eebfa4e65a0e" Jan 29 10:59:45.309692 containerd[1475]: time="2025-01-29T10:59:45.309610200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pls68,Uid:5db86eb9-a268-40b5-904d-736060d52a85,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:59:45.309893 kubelet[2775]: E0129 10:59:45.309852 2775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:59:45.309893 kubelet[2775]: E0129 10:59:45.309892 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pls68" Jan 29 10:59:45.310000 kubelet[2775]: E0129 10:59:45.309918 2775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pls68" Jan 29 10:59:45.310000 kubelet[2775]: E0129 10:59:45.309950 2775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pls68_kube-system(5db86eb9-a268-40b5-904d-736060d52a85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pls68_kube-system(5db86eb9-a268-40b5-904d-736060d52a85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-pls68" 
podUID="5db86eb9-a268-40b5-904d-736060d52a85" Jan 29 10:59:45.419799 containerd[1475]: time="2025-01-29T10:59:45.419677781Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 29 10:59:45.436930 containerd[1475]: time="2025-01-29T10:59:45.436782152Z" level=info msg="CreateContainer within sandbox \"0f3758b828eb193c9ca1c7d1f4d20f76f70fca834ea3f1b33a02520c4423ccea\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8462ab2104185b38af3a9947e2c4a6603b7004b06d3cce06e08b5eddb596f00b\"" Jan 29 10:59:45.438873 containerd[1475]: time="2025-01-29T10:59:45.438024045Z" level=info msg="StartContainer for \"8462ab2104185b38af3a9947e2c4a6603b7004b06d3cce06e08b5eddb596f00b\"" Jan 29 10:59:45.462345 systemd[1]: Started cri-containerd-8462ab2104185b38af3a9947e2c4a6603b7004b06d3cce06e08b5eddb596f00b.scope - libcontainer container 8462ab2104185b38af3a9947e2c4a6603b7004b06d3cce06e08b5eddb596f00b. Jan 29 10:59:45.491880 containerd[1475]: time="2025-01-29T10:59:45.491826823Z" level=info msg="StartContainer for \"8462ab2104185b38af3a9947e2c4a6603b7004b06d3cce06e08b5eddb596f00b\" returns successfully" Jan 29 10:59:45.940017 systemd[1]: run-netns-cni\x2d3af881cb\x2d2c1c\x2d346b\x2dd164\x2d1054b9714d55.mount: Deactivated successfully. Jan 29 10:59:45.940123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31e30fa8e73f7ce1bb7147cfa4217c53105846758406fde395fed938a8170da5-shm.mount: Deactivated successfully. Jan 29 10:59:46.437139 kubelet[2775]: I0129 10:59:46.435890 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vkzrb" podStartSLOduration=3.896426759 podStartE2EDuration="10.435870988s" podCreationTimestamp="2025-01-29 10:59:36 +0000 UTC" firstStartedPulling="2025-01-29 10:59:38.229588999 +0000 UTC m=+16.047023693" lastFinishedPulling="2025-01-29 10:59:44.769033228 +0000 UTC m=+22.586467922" observedRunningTime="2025-01-29 10:59:46.435570945 +0000 UTC m=+24.253005639" watchObservedRunningTime="2025-01-29 10:59:46.435870988 +0000 UTC m=+24.253305722" Jan 29 10:59:46.573775 systemd-networkd[1385]: flannel.1: Link UP Jan 29 10:59:46.573783 systemd-networkd[1385]: flannel.1: Gained carrier Jan 29 10:59:47.761431 systemd-networkd[1385]: flannel.1: Gained IPv6LL Jan 29 10:59:57.274627 containerd[1475]: time="2025-01-29T10:59:57.274411028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gdzx,Uid:6e8d4f5b-953b-4848-8527-eebfa4e65a0e,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:57.308486 systemd-networkd[1385]: cni0: Link UP Jan 29 10:59:57.308493 systemd-networkd[1385]: cni0: Gained carrier Jan 29 10:59:57.313814 systemd-networkd[1385]: cni0: Lost carrier Jan 29 10:59:57.316378 systemd-networkd[1385]: vethb8f8199c: Link UP Jan 29 10:59:57.318256 kernel: cni0: port 1(vethb8f8199c) entered blocking state Jan 29 10:59:57.318349 kernel: cni0: port 1(vethb8f8199c) entered disabled state Jan 29 10:59:57.319352 kernel: vethb8f8199c: entered allmulticast mode Jan 29 10:59:57.320937 kernel: vethb8f8199c: entered promiscuous mode Jan 29 10:59:57.321000 kernel: cni0: port 1(vethb8f8199c) entered blocking state Jan 29 10:59:57.321028 kernel: cni0: port 1(vethb8f8199c) entered forwarding state Jan 29 10:59:57.322207 kernel: cni0: port 1(vethb8f8199c) entered disabled state Jan 29 10:59:57.333285 kernel: cni0: port 1(vethb8f8199c) entered blocking state Jan 29 10:59:57.333345 
kernel: cni0: port 1(vethb8f8199c) entered forwarding state Jan 29 10:59:57.333430 systemd-networkd[1385]: vethb8f8199c: Gained carrier Jan 29 10:59:57.335378 systemd-networkd[1385]: cni0: Gained carrier Jan 29 10:59:57.337603 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001c938), "name":"cbr0", "type":"bridge"} Jan 29 10:59:57.337603 containerd[1475]: delegateAdd: netconf sent to delegate plugin: Jan 29 10:59:57.359614 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T10:59:57.359476433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:57.359614 containerd[1475]: time="2025-01-29T10:59:57.359547833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:57.359614 containerd[1475]: time="2025-01-29T10:59:57.359567554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:57.360407 containerd[1475]: time="2025-01-29T10:59:57.359664235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:57.384361 systemd[1]: Started cri-containerd-7600de2e78ac65ab189cc3e3d7ee23945a58f7b2db15a3dd6f8475da5448625f.scope - libcontainer container 7600de2e78ac65ab189cc3e3d7ee23945a58f7b2db15a3dd6f8475da5448625f. Jan 29 10:59:57.421218 containerd[1475]: time="2025-01-29T10:59:57.421155685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5gdzx,Uid:6e8d4f5b-953b-4848-8527-eebfa4e65a0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7600de2e78ac65ab189cc3e3d7ee23945a58f7b2db15a3dd6f8475da5448625f\"" Jan 29 10:59:57.425605 containerd[1475]: time="2025-01-29T10:59:57.425437288Z" level=info msg="CreateContainer within sandbox \"7600de2e78ac65ab189cc3e3d7ee23945a58f7b2db15a3dd6f8475da5448625f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:59:57.441841 containerd[1475]: time="2025-01-29T10:59:57.441686849Z" level=info msg="CreateContainer within sandbox \"7600de2e78ac65ab189cc3e3d7ee23945a58f7b2db15a3dd6f8475da5448625f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30ac27072177588e987fea2fac14557ce93049a1240562b72738f5eec57ca550\"" Jan 29 10:59:57.443256 containerd[1475]: time="2025-01-29T10:59:57.443113983Z" level=info msg="StartContainer for \"30ac27072177588e987fea2fac14557ce93049a1240562b72738f5eec57ca550\"" Jan 29 10:59:57.470476 systemd[1]: Started cri-containerd-30ac27072177588e987fea2fac14557ce93049a1240562b72738f5eec57ca550.scope - libcontainer container 30ac27072177588e987fea2fac14557ce93049a1240562b72738f5eec57ca550. 
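Sandbox creation succeeds here because flanneld has by now written /run/flannel/subnet.env, the file whose absence caused the loadFlannelSubnetEnv failures at 10:59:45. A sketch of its contents consistent with the bridge configuration above; only the /24 pod CIDR, the /17 network route and the 1450 MTU are visible in the log, the exact subnet gateway is assumed, and FLANNEL_IPMASQ is omitted because its value cannot be read off this log:

    # /run/flannel/subnet.env -- hypothetical reconstruction
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24   # gateway address within the node's /24 is assumed
    FLANNEL_MTU=1450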
Jan 29 10:59:57.500152 containerd[1475]: time="2025-01-29T10:59:57.500114189Z" level=info msg="StartContainer for \"30ac27072177588e987fea2fac14557ce93049a1240562b72738f5eec57ca550\" returns successfully" Jan 29 10:59:58.274810 containerd[1475]: time="2025-01-29T10:59:58.274216075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pls68,Uid:5db86eb9-a268-40b5-904d-736060d52a85,Namespace:kube-system,Attempt:0,}" Jan 29 10:59:58.298049 systemd-networkd[1385]: veth19c44b3f: Link UP Jan 29 10:59:58.299896 kernel: cni0: port 2(veth19c44b3f) entered blocking state Jan 29 10:59:58.299969 kernel: cni0: port 2(veth19c44b3f) entered disabled state Jan 29 10:59:58.299984 kernel: veth19c44b3f: entered allmulticast mode Jan 29 10:59:58.300000 kernel: veth19c44b3f: entered promiscuous mode Jan 29 10:59:58.301286 kernel: cni0: port 2(veth19c44b3f) entered blocking state Jan 29 10:59:58.302096 kernel: cni0: port 2(veth19c44b3f) entered forwarding state Jan 29 10:59:58.305618 systemd-networkd[1385]: veth19c44b3f: Gained carrier Jan 29 10:59:58.310632 containerd[1475]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000106628), "name":"cbr0", "type":"bridge"} Jan 29 10:59:58.310632 containerd[1475]: delegateAdd: netconf sent to delegate plugin: Jan 29 10:59:58.329057 containerd[1475]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T10:59:58.328954898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:59:58.329614 containerd[1475]: time="2025-01-29T10:59:58.329101620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:59:58.329842 containerd[1475]: time="2025-01-29T10:59:58.329120500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:58.329842 containerd[1475]: time="2025-01-29T10:59:58.329768787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:59:58.351362 systemd[1]: Started cri-containerd-5c118695385ffc4a725f7f61d7064e9300d80f330a7accfa084ce822046e4546.scope - libcontainer container 5c118695385ffc4a725f7f61d7064e9300d80f330a7accfa084ce822046e4546. 
Jan 29 10:59:58.386974 containerd[1475]: time="2025-01-29T10:59:58.386901314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pls68,Uid:5db86eb9-a268-40b5-904d-736060d52a85,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c118695385ffc4a725f7f61d7064e9300d80f330a7accfa084ce822046e4546\"" Jan 29 10:59:58.391551 containerd[1475]: time="2025-01-29T10:59:58.391510479Z" level=info msg="CreateContainer within sandbox \"5c118695385ffc4a725f7f61d7064e9300d80f330a7accfa084ce822046e4546\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:59:58.406220 containerd[1475]: time="2025-01-29T10:59:58.406138945Z" level=info msg="CreateContainer within sandbox \"5c118695385ffc4a725f7f61d7064e9300d80f330a7accfa084ce822046e4546\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ef14ac5bb9d9e5c2d30679e64542579eb5ab98fb60742bd8868ec16c007ebc2\"" Jan 29 10:59:58.408761 containerd[1475]: time="2025-01-29T10:59:58.408603689Z" level=info msg="StartContainer for \"2ef14ac5bb9d9e5c2d30679e64542579eb5ab98fb60742bd8868ec16c007ebc2\"" Jan 29 10:59:58.436550 systemd[1]: Started cri-containerd-2ef14ac5bb9d9e5c2d30679e64542579eb5ab98fb60742bd8868ec16c007ebc2.scope - libcontainer container 2ef14ac5bb9d9e5c2d30679e64542579eb5ab98fb60742bd8868ec16c007ebc2. Jan 29 10:59:58.470910 containerd[1475]: time="2025-01-29T10:59:58.470284541Z" level=info msg="StartContainer for \"2ef14ac5bb9d9e5c2d30679e64542579eb5ab98fb60742bd8868ec16c007ebc2\" returns successfully" Jan 29 10:59:58.481137 kubelet[2775]: I0129 10:59:58.481064 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5gdzx" podStartSLOduration=22.481043888 podStartE2EDuration="22.481043888s" podCreationTimestamp="2025-01-29 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:58.480432602 +0000 UTC m=+36.297867376" watchObservedRunningTime="2025-01-29 10:59:58.481043888 +0000 UTC m=+36.298478582" Jan 29 10:59:58.897359 systemd-networkd[1385]: cni0: Gained IPv6LL Jan 29 10:59:59.345507 systemd-networkd[1385]: vethb8f8199c: Gained IPv6LL Jan 29 10:59:59.473809 systemd-networkd[1385]: veth19c44b3f: Gained IPv6LL Jan 29 10:59:59.484837 kubelet[2775]: I0129 10:59:59.484386 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pls68" podStartSLOduration=23.484335123 podStartE2EDuration="23.484335123s" podCreationTimestamp="2025-01-29 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:59:59.481752218 +0000 UTC m=+37.299186992" watchObservedRunningTime="2025-01-29 10:59:59.484335123 +0000 UTC m=+37.301769857" Jan 29 11:00:43.597724 systemd[1]: Started sshd@5-188.34.178.132:22-175.205.197.156:37752.service - OpenSSH per-connection server daemon (175.205.197.156:37752). Jan 29 11:00:47.483795 sshd[3853]: maximum authentication attempts exceeded for root from 175.205.197.156 port 37752 ssh2 [preauth] Jan 29 11:00:47.483795 sshd[3853]: Disconnecting authenticating user root 175.205.197.156 port 37752: Too many authentication failures [preauth] Jan 29 11:00:47.489080 systemd[1]: sshd@5-188.34.178.132:22-175.205.197.156:37752.service: Deactivated successfully. 
Jan 29 11:00:48.080605 systemd[1]: Started sshd@6-188.34.178.132:22-175.205.197.156:38280.service - OpenSSH per-connection server daemon (175.205.197.156:38280). Jan 29 11:00:52.083169 sshd[3880]: maximum authentication attempts exceeded for root from 175.205.197.156 port 38280 ssh2 [preauth] Jan 29 11:00:52.083169 sshd[3880]: Disconnecting authenticating user root 175.205.197.156 port 38280: Too many authentication failures [preauth] Jan 29 11:00:52.086044 systemd[1]: sshd@6-188.34.178.132:22-175.205.197.156:38280.service: Deactivated successfully. Jan 29 11:00:52.812019 systemd[1]: Started sshd@7-188.34.178.132:22-175.205.197.156:38916.service - OpenSSH per-connection server daemon (175.205.197.156:38916). Jan 29 11:00:57.061829 sshd[3906]: maximum authentication attempts exceeded for root from 175.205.197.156 port 38916 ssh2 [preauth] Jan 29 11:00:57.061829 sshd[3906]: Disconnecting authenticating user root 175.205.197.156 port 38916: Too many authentication failures [preauth] Jan 29 11:00:57.065444 systemd[1]: sshd@7-188.34.178.132:22-175.205.197.156:38916.service: Deactivated successfully. Jan 29 11:00:58.718674 systemd[1]: Started sshd@8-188.34.178.132:22-175.205.197.156:39540.service - OpenSSH per-connection server daemon (175.205.197.156:39540). Jan 29 11:01:01.374612 sshd[3932]: Received disconnect from 175.205.197.156 port 39540:11: disconnected by user [preauth] Jan 29 11:01:01.374612 sshd[3932]: Disconnected from authenticating user root 175.205.197.156 port 39540 [preauth] Jan 29 11:01:01.376788 systemd[1]: sshd@8-188.34.178.132:22-175.205.197.156:39540.service: Deactivated successfully. Jan 29 11:01:01.743701 systemd[1]: Started sshd@9-188.34.178.132:22-175.205.197.156:40068.service - OpenSSH per-connection server daemon (175.205.197.156:40068). Jan 29 11:01:04.711444 sshd[3937]: Invalid user admin from 175.205.197.156 port 40068 Jan 29 11:01:06.501954 sshd[3937]: maximum authentication attempts exceeded for invalid user admin from 175.205.197.156 port 40068 ssh2 [preauth] Jan 29 11:01:06.501954 sshd[3937]: Disconnecting invalid user admin 175.205.197.156 port 40068: Too many authentication failures [preauth] Jan 29 11:01:06.505174 systemd[1]: sshd@9-188.34.178.132:22-175.205.197.156:40068.service: Deactivated successfully. Jan 29 11:01:07.222575 systemd[1]: Started sshd@10-188.34.178.132:22-175.205.197.156:40666.service - OpenSSH per-connection server daemon (175.205.197.156:40666). Jan 29 11:01:09.922782 sshd[3984]: Invalid user admin from 175.205.197.156 port 40666 Jan 29 11:01:11.709615 sshd[3984]: maximum authentication attempts exceeded for invalid user admin from 175.205.197.156 port 40666 ssh2 [preauth] Jan 29 11:01:11.709615 sshd[3984]: Disconnecting invalid user admin 175.205.197.156 port 40666: Too many authentication failures [preauth] Jan 29 11:01:11.712151 systemd[1]: sshd@10-188.34.178.132:22-175.205.197.156:40666.service: Deactivated successfully. Jan 29 11:01:12.438601 systemd[1]: Started sshd@11-188.34.178.132:22-175.205.197.156:41338.service - OpenSSH per-connection server daemon (175.205.197.156:41338). Jan 29 11:01:14.556173 sshd[4012]: Invalid user admin from 175.205.197.156 port 41338 Jan 29 11:01:15.993217 sshd[4012]: Received disconnect from 175.205.197.156 port 41338:11: disconnected by user [preauth] Jan 29 11:01:15.993217 sshd[4012]: Disconnected from invalid user admin 175.205.197.156 port 41338 [preauth] Jan 29 11:01:15.996892 systemd[1]: sshd@11-188.34.178.132:22-175.205.197.156:41338.service: Deactivated successfully. 
Jan 29 11:01:16.361776 systemd[1]: Started sshd@12-188.34.178.132:22-175.205.197.156:41812.service - OpenSSH per-connection server daemon (175.205.197.156:41812). Jan 29 11:01:18.872099 sshd[4017]: Invalid user oracle from 175.205.197.156 port 41812 Jan 29 11:01:21.407128 sshd[4017]: maximum authentication attempts exceeded for invalid user oracle from 175.205.197.156 port 41812 ssh2 [preauth] Jan 29 11:01:21.407128 sshd[4017]: Disconnecting invalid user oracle 175.205.197.156 port 41812: Too many authentication failures [preauth] Jan 29 11:01:21.411152 systemd[1]: sshd@12-188.34.178.132:22-175.205.197.156:41812.service: Deactivated successfully. Jan 29 11:01:22.133657 systemd[1]: Started sshd@13-188.34.178.132:22-175.205.197.156:42530.service - OpenSSH per-connection server daemon (175.205.197.156:42530). Jan 29 11:01:24.419070 sshd[4065]: Invalid user oracle from 175.205.197.156 port 42530 Jan 29 11:01:27.109553 sshd[4065]: maximum authentication attempts exceeded for invalid user oracle from 175.205.197.156 port 42530 ssh2 [preauth] Jan 29 11:01:27.109553 sshd[4065]: Disconnecting invalid user oracle 175.205.197.156 port 42530: Too many authentication failures [preauth] Jan 29 11:01:27.112957 systemd[1]: sshd@13-188.34.178.132:22-175.205.197.156:42530.service: Deactivated successfully. Jan 29 11:01:27.844545 systemd[1]: Started sshd@14-188.34.178.132:22-175.205.197.156:43216.service - OpenSSH per-connection server daemon (175.205.197.156:43216). Jan 29 11:01:30.257636 sshd[4093]: Invalid user oracle from 175.205.197.156 port 43216 Jan 29 11:01:31.492312 sshd[4093]: Received disconnect from 175.205.197.156 port 43216:11: disconnected by user [preauth] Jan 29 11:01:31.492312 sshd[4093]: Disconnected from invalid user oracle 175.205.197.156 port 43216 [preauth] Jan 29 11:01:31.495026 systemd[1]: sshd@14-188.34.178.132:22-175.205.197.156:43216.service: Deactivated successfully. Jan 29 11:01:31.853572 systemd[1]: Started sshd@15-188.34.178.132:22-175.205.197.156:43716.service - OpenSSH per-connection server daemon (175.205.197.156:43716). Jan 29 11:01:34.117876 sshd[4098]: Invalid user usuario from 175.205.197.156 port 43716 Jan 29 11:01:35.907869 sshd[4098]: maximum authentication attempts exceeded for invalid user usuario from 175.205.197.156 port 43716 ssh2 [preauth] Jan 29 11:01:35.907869 sshd[4098]: Disconnecting invalid user usuario 175.205.197.156 port 43716: Too many authentication failures [preauth] Jan 29 11:01:35.910551 systemd[1]: sshd@15-188.34.178.132:22-175.205.197.156:43716.service: Deactivated successfully. Jan 29 11:01:36.628610 systemd[1]: Started sshd@16-188.34.178.132:22-175.205.197.156:44288.service - OpenSSH per-connection server daemon (175.205.197.156:44288). Jan 29 11:01:39.174106 sshd[4124]: Invalid user usuario from 175.205.197.156 port 44288 Jan 29 11:01:41.005085 sshd[4124]: maximum authentication attempts exceeded for invalid user usuario from 175.205.197.156 port 44288 ssh2 [preauth] Jan 29 11:01:41.005085 sshd[4124]: Disconnecting invalid user usuario 175.205.197.156 port 44288: Too many authentication failures [preauth] Jan 29 11:01:41.008566 systemd[1]: sshd@16-188.34.178.132:22-175.205.197.156:44288.service: Deactivated successfully. Jan 29 11:01:41.581562 systemd[1]: Started sshd@17-188.34.178.132:22-175.205.197.156:44920.service - OpenSSH per-connection server daemon (175.205.197.156:44920). 
Jan 29 11:01:43.671683 sshd[4153]: Invalid user usuario from 175.205.197.156 port 44920 Jan 29 11:01:45.118317 sshd[4153]: Received disconnect from 175.205.197.156 port 44920:11: disconnected by user [preauth] Jan 29 11:01:45.118317 sshd[4153]: Disconnected from invalid user usuario 175.205.197.156 port 44920 [preauth] Jan 29 11:01:45.121143 systemd[1]: sshd@17-188.34.178.132:22-175.205.197.156:44920.service: Deactivated successfully. Jan 29 11:01:45.487685 systemd[1]: Started sshd@18-188.34.178.132:22-175.205.197.156:45416.service - OpenSSH per-connection server daemon (175.205.197.156:45416). Jan 29 11:01:47.920164 sshd[4179]: Invalid user test from 175.205.197.156 port 45416 Jan 29 11:01:49.743140 sshd[4179]: maximum authentication attempts exceeded for invalid user test from 175.205.197.156 port 45416 ssh2 [preauth] Jan 29 11:01:49.743140 sshd[4179]: Disconnecting invalid user test 175.205.197.156 port 45416: Too many authentication failures [preauth] Jan 29 11:01:49.744971 systemd[1]: sshd@18-188.34.178.132:22-175.205.197.156:45416.service: Deactivated successfully. Jan 29 11:01:50.481475 systemd[1]: Started sshd@19-188.34.178.132:22-175.205.197.156:46026.service - OpenSSH per-connection server daemon (175.205.197.156:46026). Jan 29 11:01:53.323647 sshd[4205]: Invalid user test from 175.205.197.156 port 46026 Jan 29 11:01:56.052618 sshd[4205]: maximum authentication attempts exceeded for invalid user test from 175.205.197.156 port 46026 ssh2 [preauth] Jan 29 11:01:56.052618 sshd[4205]: Disconnecting invalid user test 175.205.197.156 port 46026: Too many authentication failures [preauth] Jan 29 11:01:56.055815 systemd[1]: sshd@19-188.34.178.132:22-175.205.197.156:46026.service: Deactivated successfully. Jan 29 11:01:57.794672 systemd[1]: Started sshd@20-188.34.178.132:22-175.205.197.156:46826.service - OpenSSH per-connection server daemon (175.205.197.156:46826). Jan 29 11:02:00.855594 sshd[4253]: Invalid user test from 175.205.197.156 port 46826 Jan 29 11:02:02.305144 sshd[4253]: Received disconnect from 175.205.197.156 port 46826:11: disconnected by user [preauth] Jan 29 11:02:02.305144 sshd[4253]: Disconnected from invalid user test 175.205.197.156 port 46826 [preauth] Jan 29 11:02:02.308547 systemd[1]: sshd@20-188.34.178.132:22-175.205.197.156:46826.service: Deactivated successfully. Jan 29 11:02:02.673340 systemd[1]: Started sshd@21-188.34.178.132:22-175.205.197.156:47550.service - OpenSSH per-connection server daemon (175.205.197.156:47550). Jan 29 11:02:04.674762 sshd[4281]: Invalid user user from 175.205.197.156 port 47550 Jan 29 11:02:06.495312 sshd[4281]: maximum authentication attempts exceeded for invalid user user from 175.205.197.156 port 47550 ssh2 [preauth] Jan 29 11:02:06.495312 sshd[4281]: Disconnecting invalid user user 175.205.197.156 port 47550: Too many authentication failures [preauth] Jan 29 11:02:06.497061 systemd[1]: sshd@21-188.34.178.132:22-175.205.197.156:47550.service: Deactivated successfully. Jan 29 11:02:07.228493 systemd[1]: Started sshd@22-188.34.178.132:22-175.205.197.156:48176.service - OpenSSH per-connection server daemon (175.205.197.156:48176). 
Jan 29 11:02:10.786943 sshd[4293]: Invalid user user from 175.205.197.156 port 48176 Jan 29 11:02:13.409974 sshd[4293]: maximum authentication attempts exceeded for invalid user user from 175.205.197.156 port 48176 ssh2 [preauth] Jan 29 11:02:13.409974 sshd[4293]: Disconnecting invalid user user 175.205.197.156 port 48176: Too many authentication failures [preauth] Jan 29 11:02:13.413947 systemd[1]: sshd@22-188.34.178.132:22-175.205.197.156:48176.service: Deactivated successfully. Jan 29 11:02:13.987631 systemd[1]: Started sshd@23-188.34.178.132:22-175.205.197.156:49048.service - OpenSSH per-connection server daemon (175.205.197.156:49048). Jan 29 11:02:16.213332 sshd[4337]: Invalid user user from 175.205.197.156 port 49048 Jan 29 11:02:17.652125 sshd[4337]: Received disconnect from 175.205.197.156 port 49048:11: disconnected by user [preauth] Jan 29 11:02:17.652125 sshd[4337]: Disconnected from invalid user user 175.205.197.156 port 49048 [preauth] Jan 29 11:02:17.655208 systemd[1]: sshd@23-188.34.178.132:22-175.205.197.156:49048.service: Deactivated successfully. Jan 29 11:02:18.028628 systemd[1]: Started sshd@24-188.34.178.132:22-175.205.197.156:49580.service - OpenSSH per-connection server daemon (175.205.197.156:49580). Jan 29 11:02:21.290159 sshd[4363]: Invalid user ftpuser from 175.205.197.156 port 49580 Jan 29 11:02:23.122220 sshd[4363]: maximum authentication attempts exceeded for invalid user ftpuser from 175.205.197.156 port 49580 ssh2 [preauth] Jan 29 11:02:23.122220 sshd[4363]: Disconnecting invalid user ftpuser 175.205.197.156 port 49580: Too many authentication failures [preauth] Jan 29 11:02:23.125966 systemd[1]: sshd@24-188.34.178.132:22-175.205.197.156:49580.service: Deactivated successfully. Jan 29 11:02:23.837545 systemd[1]: Started sshd@25-188.34.178.132:22-175.205.197.156:50318.service - OpenSSH per-connection server daemon (175.205.197.156:50318). Jan 29 11:02:26.632011 sshd[4392]: Invalid user ftpuser from 175.205.197.156 port 50318 Jan 29 11:02:29.788302 sshd[4392]: maximum authentication attempts exceeded for invalid user ftpuser from 175.205.197.156 port 50318 ssh2 [preauth] Jan 29 11:02:29.788302 sshd[4392]: Disconnecting invalid user ftpuser 175.205.197.156 port 50318: Too many authentication failures [preauth] Jan 29 11:02:29.791012 systemd[1]: sshd@25-188.34.178.132:22-175.205.197.156:50318.service: Deactivated successfully. Jan 29 11:02:30.513663 systemd[1]: Started sshd@26-188.34.178.132:22-175.205.197.156:51190.service - OpenSSH per-connection server daemon (175.205.197.156:51190). Jan 29 11:02:32.901663 sshd[4419]: Invalid user ftpuser from 175.205.197.156 port 51190 Jan 29 11:02:35.219023 sshd[4419]: Received disconnect from 175.205.197.156 port 51190:11: disconnected by user [preauth] Jan 29 11:02:35.219023 sshd[4419]: Disconnected from invalid user ftpuser 175.205.197.156 port 51190 [preauth] Jan 29 11:02:35.220820 systemd[1]: sshd@26-188.34.178.132:22-175.205.197.156:51190.service: Deactivated successfully. Jan 29 11:02:35.508567 systemd[1]: Started sshd@27-188.34.178.132:22-175.205.197.156:51810.service - OpenSSH per-connection server daemon (175.205.197.156:51810). 
Jan 29 11:02:37.497140 sshd[4445]: Invalid user test1 from 175.205.197.156 port 51810 Jan 29 11:02:39.973137 sshd[4445]: maximum authentication attempts exceeded for invalid user test1 from 175.205.197.156 port 51810 ssh2 [preauth] Jan 29 11:02:39.973137 sshd[4445]: Disconnecting invalid user test1 175.205.197.156 port 51810: Too many authentication failures [preauth] Jan 29 11:02:39.976781 systemd[1]: sshd@27-188.34.178.132:22-175.205.197.156:51810.service: Deactivated successfully. Jan 29 11:02:40.700513 systemd[1]: Started sshd@28-188.34.178.132:22-175.205.197.156:52490.service - OpenSSH per-connection server daemon (175.205.197.156:52490). Jan 29 11:02:44.364442 sshd[4473]: Invalid user test1 from 175.205.197.156 port 52490 Jan 29 11:02:46.157404 sshd[4473]: maximum authentication attempts exceeded for invalid user test1 from 175.205.197.156 port 52490 ssh2 [preauth] Jan 29 11:02:46.157404 sshd[4473]: Disconnecting invalid user test1 175.205.197.156 port 52490: Too many authentication failures [preauth] Jan 29 11:02:46.160804 systemd[1]: sshd@28-188.34.178.132:22-175.205.197.156:52490.service: Deactivated successfully. Jan 29 11:02:46.699523 systemd[1]: Started sshd@29-188.34.178.132:22-175.205.197.156:53336.service - OpenSSH per-connection server daemon (175.205.197.156:53336). Jan 29 11:02:48.763229 sshd[4500]: Invalid user test1 from 175.205.197.156 port 53336 Jan 29 11:02:50.940154 sshd[4500]: Received disconnect from 175.205.197.156 port 53336:11: disconnected by user [preauth] Jan 29 11:02:50.940154 sshd[4500]: Disconnected from invalid user test1 175.205.197.156 port 53336 [preauth] Jan 29 11:02:50.944839 systemd[1]: sshd@29-188.34.178.132:22-175.205.197.156:53336.service: Deactivated successfully. Jan 29 11:02:51.306590 systemd[1]: Started sshd@30-188.34.178.132:22-175.205.197.156:53888.service - OpenSSH per-connection server daemon (175.205.197.156:53888). Jan 29 11:02:53.518004 sshd[4527]: Invalid user test2 from 175.205.197.156 port 53888 Jan 29 11:02:55.326932 sshd[4527]: maximum authentication attempts exceeded for invalid user test2 from 175.205.197.156 port 53888 ssh2 [preauth] Jan 29 11:02:55.326932 sshd[4527]: Disconnecting invalid user test2 175.205.197.156 port 53888: Too many authentication failures [preauth] Jan 29 11:02:55.328464 systemd[1]: sshd@30-188.34.178.132:22-175.205.197.156:53888.service: Deactivated successfully. Jan 29 11:02:56.050452 systemd[1]: Started sshd@31-188.34.178.132:22-175.205.197.156:54554.service - OpenSSH per-connection server daemon (175.205.197.156:54554). Jan 29 11:02:58.141028 sshd[4553]: Invalid user test2 from 175.205.197.156 port 54554 Jan 29 11:03:00.636173 sshd[4553]: maximum authentication attempts exceeded for invalid user test2 from 175.205.197.156 port 54554 ssh2 [preauth] Jan 29 11:03:00.636173 sshd[4553]: Disconnecting invalid user test2 175.205.197.156 port 54554: Too many authentication failures [preauth] Jan 29 11:03:00.639025 systemd[1]: sshd@31-188.34.178.132:22-175.205.197.156:54554.service: Deactivated successfully. Jan 29 11:03:01.352470 systemd[1]: Started sshd@32-188.34.178.132:22-175.205.197.156:55298.service - OpenSSH per-connection server daemon (175.205.197.156:55298). 
Jan 29 11:03:04.002689 sshd[4580]: Invalid user test2 from 175.205.197.156 port 55298 Jan 29 11:03:05.458291 sshd[4580]: Received disconnect from 175.205.197.156 port 55298:11: disconnected by user [preauth] Jan 29 11:03:05.458291 sshd[4580]: Disconnected from invalid user test2 175.205.197.156 port 55298 [preauth] Jan 29 11:03:05.461125 systemd[1]: sshd@32-188.34.178.132:22-175.205.197.156:55298.service: Deactivated successfully. Jan 29 11:03:05.831707 systemd[1]: Started sshd@33-188.34.178.132:22-175.205.197.156:55882.service - OpenSSH per-connection server daemon (175.205.197.156:55882). Jan 29 11:03:07.801676 sshd[4606]: Invalid user contador from 175.205.197.156 port 55882 Jan 29 11:03:08.704881 sshd[4606]: Received disconnect from 175.205.197.156 port 55882:11: disconnected by user [preauth] Jan 29 11:03:08.704881 sshd[4606]: Disconnected from invalid user contador 175.205.197.156 port 55882 [preauth] Jan 29 11:03:08.707009 systemd[1]: sshd@33-188.34.178.132:22-175.205.197.156:55882.service: Deactivated successfully. Jan 29 11:03:09.083435 systemd[1]: Started sshd@34-188.34.178.132:22-175.205.197.156:56338.service - OpenSSH per-connection server daemon (175.205.197.156:56338). Jan 29 11:03:12.881212 sshd[4634]: Invalid user ubuntu from 175.205.197.156 port 56338 Jan 29 11:03:14.732157 sshd[4634]: maximum authentication attempts exceeded for invalid user ubuntu from 175.205.197.156 port 56338 ssh2 [preauth] Jan 29 11:03:14.732157 sshd[4634]: Disconnecting invalid user ubuntu 175.205.197.156 port 56338: Too many authentication failures [preauth] Jan 29 11:03:14.735107 systemd[1]: sshd@34-188.34.178.132:22-175.205.197.156:56338.service: Deactivated successfully. Jan 29 11:03:15.918543 systemd[1]: Started sshd@35-188.34.178.132:22-175.205.197.156:57266.service - OpenSSH per-connection server daemon (175.205.197.156:57266). Jan 29 11:03:18.295797 sshd[4660]: Invalid user ubuntu from 175.205.197.156 port 57266 Jan 29 11:03:20.544886 sshd[4660]: maximum authentication attempts exceeded for invalid user ubuntu from 175.205.197.156 port 57266 ssh2 [preauth] Jan 29 11:03:20.544886 sshd[4660]: Disconnecting invalid user ubuntu 175.205.197.156 port 57266: Too many authentication failures [preauth] Jan 29 11:03:20.547002 systemd[1]: sshd@35-188.34.178.132:22-175.205.197.156:57266.service: Deactivated successfully. Jan 29 11:03:21.267512 systemd[1]: Started sshd@36-188.34.178.132:22-175.205.197.156:58008.service - OpenSSH per-connection server daemon (175.205.197.156:58008). Jan 29 11:03:24.626417 sshd[4686]: Invalid user ubuntu from 175.205.197.156 port 58008 Jan 29 11:03:25.717056 sshd[4686]: Received disconnect from 175.205.197.156 port 58008:11: disconnected by user [preauth] Jan 29 11:03:25.717056 sshd[4686]: Disconnected from invalid user ubuntu 175.205.197.156 port 58008 [preauth] Jan 29 11:03:25.720808 systemd[1]: sshd@36-188.34.178.132:22-175.205.197.156:58008.service: Deactivated successfully. Jan 29 11:03:26.100578 systemd[1]: Started sshd@37-188.34.178.132:22-175.205.197.156:58656.service - OpenSSH per-connection server daemon (175.205.197.156:58656). Jan 29 11:03:29.014763 sshd[4714]: Invalid user pi from 175.205.197.156 port 58656 Jan 29 11:03:30.974657 sshd[4714]: Received disconnect from 175.205.197.156 port 58656:11: disconnected by user [preauth] Jan 29 11:03:30.974657 sshd[4714]: Disconnected from invalid user pi 175.205.197.156 port 58656 [preauth] Jan 29 11:03:30.977842 systemd[1]: sshd@37-188.34.178.132:22-175.205.197.156:58656.service: Deactivated successfully. 
Jan 29 11:03:31.352692 systemd[1]: Started sshd@38-188.34.178.132:22-175.205.197.156:59340.service - OpenSSH per-connection server daemon (175.205.197.156:59340). Jan 29 11:03:33.415550 sshd[4740]: Invalid user baikal from 175.205.197.156 port 59340 Jan 29 11:03:33.779630 sshd[4740]: Received disconnect from 175.205.197.156 port 59340:11: disconnected by user [preauth] Jan 29 11:03:33.779630 sshd[4740]: Disconnected from invalid user baikal 175.205.197.156 port 59340 [preauth] Jan 29 11:03:33.783616 systemd[1]: sshd@38-188.34.178.132:22-175.205.197.156:59340.service: Deactivated successfully. Jan 29 11:04:08.226685 systemd[1]: Started sshd@39-188.34.178.132:22-147.75.109.163:36734.service - OpenSSH per-connection server daemon (147.75.109.163:36734). Jan 29 11:04:09.212712 sshd[4920]: Accepted publickey for core from 147.75.109.163 port 36734 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:09.214558 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:09.223981 systemd-logind[1460]: New session 6 of user core. Jan 29 11:04:09.226441 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:04:09.982667 sshd[4922]: Connection closed by 147.75.109.163 port 36734 Jan 29 11:04:09.983591 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:09.989212 systemd[1]: sshd@39-188.34.178.132:22-147.75.109.163:36734.service: Deactivated successfully. Jan 29 11:04:09.992115 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:04:09.992854 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:04:09.994155 systemd-logind[1460]: Removed session 6. Jan 29 11:04:15.161621 systemd[1]: Started sshd@40-188.34.178.132:22-147.75.109.163:36750.service - OpenSSH per-connection server daemon (147.75.109.163:36750). Jan 29 11:04:16.161446 sshd[4957]: Accepted publickey for core from 147.75.109.163 port 36750 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:16.163839 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:16.170123 systemd-logind[1460]: New session 7 of user core. Jan 29 11:04:16.174351 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:04:16.931137 sshd[4959]: Connection closed by 147.75.109.163 port 36750 Jan 29 11:04:16.932171 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:16.937443 systemd[1]: sshd@40-188.34.178.132:22-147.75.109.163:36750.service: Deactivated successfully. Jan 29 11:04:16.940319 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:04:16.941267 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:04:16.942241 systemd-logind[1460]: Removed session 7. Jan 29 11:04:22.102508 systemd[1]: Started sshd@41-188.34.178.132:22-147.75.109.163:50132.service - OpenSSH per-connection server daemon (147.75.109.163:50132). Jan 29 11:04:23.089230 sshd[4992]: Accepted publickey for core from 147.75.109.163 port 50132 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:23.092043 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:23.097286 systemd-logind[1460]: New session 8 of user core. Jan 29 11:04:23.100351 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 11:04:23.841152 sshd[5017]: Connection closed by 147.75.109.163 port 50132 Jan 29 11:04:23.841027 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:23.847462 systemd[1]: sshd@41-188.34.178.132:22-147.75.109.163:50132.service: Deactivated successfully. Jan 29 11:04:23.847493 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:04:23.851020 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:04:23.852111 systemd-logind[1460]: Removed session 8. Jan 29 11:04:24.031739 systemd[1]: Started sshd@42-188.34.178.132:22-147.75.109.163:50134.service - OpenSSH per-connection server daemon (147.75.109.163:50134). Jan 29 11:04:25.027256 sshd[5030]: Accepted publickey for core from 147.75.109.163 port 50134 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:25.029404 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:25.034278 systemd-logind[1460]: New session 9 of user core. Jan 29 11:04:25.037338 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:04:25.826368 sshd[5032]: Connection closed by 147.75.109.163 port 50134 Jan 29 11:04:25.827139 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:25.831084 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:04:25.831534 systemd[1]: sshd@42-188.34.178.132:22-147.75.109.163:50134.service: Deactivated successfully. Jan 29 11:04:25.834347 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:04:25.836510 systemd-logind[1460]: Removed session 9. Jan 29 11:04:26.000596 systemd[1]: Started sshd@43-188.34.178.132:22-147.75.109.163:50148.service - OpenSSH per-connection server daemon (147.75.109.163:50148). Jan 29 11:04:26.984060 sshd[5041]: Accepted publickey for core from 147.75.109.163 port 50148 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:26.985787 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:26.990490 systemd-logind[1460]: New session 10 of user core. Jan 29 11:04:26.997490 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:04:27.745447 sshd[5043]: Connection closed by 147.75.109.163 port 50148 Jan 29 11:04:27.745944 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:27.751267 systemd[1]: sshd@43-188.34.178.132:22-147.75.109.163:50148.service: Deactivated successfully. Jan 29 11:04:27.753160 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:04:27.756035 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:04:27.757205 systemd-logind[1460]: Removed session 10. Jan 29 11:04:32.924526 systemd[1]: Started sshd@44-188.34.178.132:22-147.75.109.163:51870.service - OpenSSH per-connection server daemon (147.75.109.163:51870). Jan 29 11:04:33.913696 sshd[5080]: Accepted publickey for core from 147.75.109.163 port 51870 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:33.916287 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:33.921422 systemd-logind[1460]: New session 11 of user core. Jan 29 11:04:33.927362 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:04:34.671583 sshd[5097]: Connection closed by 147.75.109.163 port 51870 Jan 29 11:04:34.672619 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:34.678115 systemd[1]: sshd@44-188.34.178.132:22-147.75.109.163:51870.service: Deactivated successfully. Jan 29 11:04:34.680860 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:04:34.681899 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:04:34.683031 systemd-logind[1460]: Removed session 11. Jan 29 11:04:34.851621 systemd[1]: Started sshd@45-188.34.178.132:22-147.75.109.163:51880.service - OpenSSH per-connection server daemon (147.75.109.163:51880). Jan 29 11:04:35.849103 sshd[5108]: Accepted publickey for core from 147.75.109.163 port 51880 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:35.851084 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:35.856882 systemd-logind[1460]: New session 12 of user core. Jan 29 11:04:35.861370 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:04:36.654754 sshd[5110]: Connection closed by 147.75.109.163 port 51880 Jan 29 11:04:36.655590 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:36.660751 systemd[1]: sshd@45-188.34.178.132:22-147.75.109.163:51880.service: Deactivated successfully. Jan 29 11:04:36.664210 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:04:36.665241 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:04:36.666620 systemd-logind[1460]: Removed session 12. Jan 29 11:04:36.829625 systemd[1]: Started sshd@46-188.34.178.132:22-147.75.109.163:51886.service - OpenSSH per-connection server daemon (147.75.109.163:51886). Jan 29 11:04:37.810012 sshd[5120]: Accepted publickey for core from 147.75.109.163 port 51886 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:37.812546 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:37.819994 systemd-logind[1460]: New session 13 of user core. Jan 29 11:04:37.822431 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:04:39.814545 sshd[5130]: Connection closed by 147.75.109.163 port 51886 Jan 29 11:04:39.814382 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:39.819353 systemd[1]: sshd@46-188.34.178.132:22-147.75.109.163:51886.service: Deactivated successfully. Jan 29 11:04:39.822104 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:04:39.824627 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:04:39.825694 systemd-logind[1460]: Removed session 13. Jan 29 11:04:39.987627 systemd[1]: Started sshd@47-188.34.178.132:22-147.75.109.163:38992.service - OpenSSH per-connection server daemon (147.75.109.163:38992). Jan 29 11:04:40.973142 sshd[5161]: Accepted publickey for core from 147.75.109.163 port 38992 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:40.975451 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:40.980317 systemd-logind[1460]: New session 14 of user core. Jan 29 11:04:40.987534 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 11:04:41.851785 sshd[5163]: Connection closed by 147.75.109.163 port 38992 Jan 29 11:04:41.853561 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:41.858356 systemd[1]: sshd@47-188.34.178.132:22-147.75.109.163:38992.service: Deactivated successfully. Jan 29 11:04:41.860818 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:04:41.861715 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:04:41.863432 systemd-logind[1460]: Removed session 14. Jan 29 11:04:42.028575 systemd[1]: Started sshd@48-188.34.178.132:22-147.75.109.163:38996.service - OpenSSH per-connection server daemon (147.75.109.163:38996). Jan 29 11:04:43.031799 sshd[5172]: Accepted publickey for core from 147.75.109.163 port 38996 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:43.034104 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:43.040607 systemd-logind[1460]: New session 15 of user core. Jan 29 11:04:43.048162 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:04:43.778441 sshd[5191]: Connection closed by 147.75.109.163 port 38996 Jan 29 11:04:43.779621 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:43.784077 systemd[1]: sshd@48-188.34.178.132:22-147.75.109.163:38996.service: Deactivated successfully. Jan 29 11:04:43.786241 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:04:43.787652 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:04:43.790075 systemd-logind[1460]: Removed session 15. Jan 29 11:04:48.957530 systemd[1]: Started sshd@49-188.34.178.132:22-147.75.109.163:55910.service - OpenSSH per-connection server daemon (147.75.109.163:55910). Jan 29 11:04:49.941145 sshd[5229]: Accepted publickey for core from 147.75.109.163 port 55910 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:49.944162 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:49.950299 systemd-logind[1460]: New session 16 of user core. Jan 29 11:04:49.955576 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:04:50.720685 sshd[5231]: Connection closed by 147.75.109.163 port 55910 Jan 29 11:04:50.721705 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:50.728229 systemd[1]: sshd@49-188.34.178.132:22-147.75.109.163:55910.service: Deactivated successfully. Jan 29 11:04:50.731865 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:04:50.734544 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:04:50.735977 systemd-logind[1460]: Removed session 16. Jan 29 11:04:55.899563 systemd[1]: Started sshd@50-188.34.178.132:22-147.75.109.163:55922.service - OpenSSH per-connection server daemon (147.75.109.163:55922). Jan 29 11:04:56.905453 sshd[5263]: Accepted publickey for core from 147.75.109.163 port 55922 ssh2: RSA SHA256:ud3vxHVNIak4XcjfYqyE/gz2LSgCDnXIPhIjlh5WLRg Jan 29 11:04:56.908297 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:04:56.916304 systemd-logind[1460]: New session 17 of user core. Jan 29 11:04:56.920567 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 11:04:57.675122 sshd[5265]: Connection closed by 147.75.109.163 port 55922 Jan 29 11:04:57.676441 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Jan 29 11:04:57.681295 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:04:57.681855 systemd[1]: sshd@50-188.34.178.132:22-147.75.109.163:55922.service: Deactivated successfully. Jan 29 11:04:57.685814 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:04:57.692435 systemd-logind[1460]: Removed session 17. Jan 29 11:04:59.893403 update_engine[1461]: I20250129 11:04:59.892838 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 11:04:59.893403 update_engine[1461]: I20250129 11:04:59.892927 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 11:04:59.893403 update_engine[1461]: I20250129 11:04:59.893311 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894050 1461 omaha_request_params.cc:62] Current group set to beta Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894251 1461 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894269 1461 update_attempter.cc:643] Scheduling an action processor start. Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894299 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894349 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894445 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894464 1461 omaha_request_action.cc:272] Request: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: Jan 29 11:04:59.895406 update_engine[1461]: I20250129 11:04:59.894475 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:04:59.896546 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 11:04:59.897065 update_engine[1461]: I20250129 11:04:59.896806 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:04:59.897623 update_engine[1461]: I20250129 11:04:59.897445 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 11:04:59.898206 update_engine[1461]: E20250129 11:04:59.898149 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:04:59.898290 update_engine[1461]: I20250129 11:04:59.898276 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1