Feb 13 15:29:49.875664 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:29:49.875686 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:29:49.875696 kernel: KASLR enabled
Feb 13 15:29:49.875701 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:29:49.875707 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Feb 13 15:29:49.875713 kernel: random: crng init done
Feb 13 15:29:49.875720 kernel: secureboot: Secure boot disabled
Feb 13 15:29:49.875725 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:29:49.875731 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:29:49.875739 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:29:49.875745 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875750 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875756 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875762 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875769 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875777 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875783 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875789 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875796 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:29:49.875802 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:29:49.875808 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:29:49.875814 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:29:49.875820 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:29:49.875837 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 15:29:49.875846 kernel: Zone ranges:
Feb 13 15:29:49.875855 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:29:49.875861 kernel:   DMA32    empty
Feb 13 15:29:49.875867 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:29:49.875873 kernel: Movable zone start for each node
Feb 13 15:29:49.875879 kernel: Early memory node ranges
Feb 13 15:29:49.875885 kernel:   node   0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 15:29:49.875892 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:29:49.875898 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:29:49.875904 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:29:49.875910 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:29:49.875916 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:29:49.875922 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:29:49.875930 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:29:49.875936 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:29:49.875942 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:29:49.875951 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:29:49.875958 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:29:49.875964 kernel: psci: Trusted OS migration not required
Feb 13 15:29:49.875972 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:29:49.875978 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:29:49.875985 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:29:49.875992 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:29:49.875998 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:29:49.876005 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:29:49.876011 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:29:49.876018 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:29:49.876025 kernel: CPU features: detected: Spectre-v4
Feb 13 15:29:49.876031 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:29:49.876039 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:29:49.876046 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:29:49.876052 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:29:49.876059 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:29:49.876065 kernel: alternatives: applying boot alternatives
Feb 13 15:29:49.876073 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:29:49.876080 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:29:49.876087 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:29:49.876094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:29:49.876100 kernel: Fallback order for Node 0: 0
Feb 13 15:29:49.876107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:29:49.876115 kernel: Policy zone: Normal
Feb 13 15:29:49.876122 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:29:49.876128 kernel: software IO TLB: area num 2.
Feb 13 15:29:49.876135 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 15:29:49.876142 kernel: Memory: 3882680K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 213320K reserved, 0K cma-reserved)
Feb 13 15:29:49.876148 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:29:49.876155 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:29:49.876162 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:29:49.876169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:29:49.876176 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:29:49.876182 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:29:49.876189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:29:49.876197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:29:49.876204 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:29:49.876210 kernel: GICv3: 256 SPIs implemented
Feb 13 15:29:49.876217 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:29:49.876223 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:29:49.876230 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:29:49.876236 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:29:49.876243 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:29:49.876249 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:29:49.876256 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:29:49.876263 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:29:49.876271 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:29:49.876277 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:29:49.876284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:29:49.876290 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:29:49.876297 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:29:49.876304 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:29:49.876311 kernel: Console: colour dummy device 80x25
Feb 13 15:29:49.876321 kernel: ACPI: Core revision 20230628
Feb 13 15:29:49.876328 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:29:49.876335 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:29:49.876343 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:29:49.876350 kernel: landlock: Up and running.
Feb 13 15:29:49.876356 kernel: SELinux: Initializing.
Feb 13 15:29:49.876363 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:29:49.876370 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:29:49.876377 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:29:49.876384 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:29:49.876390 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:29:49.876397 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:29:49.876405 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:29:49.876412 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:29:49.876419 kernel: Remapping and enabling EFI services.
Feb 13 15:29:49.876425 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:29:49.876432 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:29:49.876439 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:29:49.876445 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:29:49.876452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:29:49.876459 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:29:49.876465 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:29:49.876473 kernel: SMP: Total of 2 processors activated.
Feb 13 15:29:49.876480 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:29:49.876492 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:29:49.876500 kernel: CPU features: detected: Common not Private translations
Feb 13 15:29:49.876508 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:29:49.876537 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:29:49.876548 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:29:49.876555 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:29:49.876562 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:29:49.876572 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:29:49.876580 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:29:49.876587 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:29:49.876594 kernel: alternatives: applying system-wide alternatives
Feb 13 15:29:49.876601 kernel: devtmpfs: initialized
Feb 13 15:29:49.876609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:29:49.876616 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:29:49.876623 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:29:49.876632 kernel: SMBIOS 3.0.0 present.
Feb 13 15:29:49.876639 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:29:49.876646 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:29:49.876654 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:29:49.876661 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:29:49.876668 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:29:49.876676 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:29:49.876683 kernel: audit: type=2000 audit(0.010:1): state=initialized audit_enabled=0 res=1
Feb 13 15:29:49.876690 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:29:49.876698 kernel: cpuidle: using governor menu
Feb 13 15:29:49.876705 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:29:49.876712 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:29:49.876719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:29:49.876726 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:29:49.876733 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:29:49.876741 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:29:49.876748 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:29:49.876755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:29:49.876764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:29:49.876771 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:29:49.876778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:29:49.876786 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:29:49.876793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:29:49.876800 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:29:49.876807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:29:49.876814 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:29:49.876822 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:29:49.876861 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:29:49.876869 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:29:49.876876 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:29:49.876884 kernel: ACPI: Interpreter enabled
Feb 13 15:29:49.876891 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:29:49.876898 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:29:49.876905 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:29:49.876912 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:29:49.876919 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:29:49.878653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:29:49.878748 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:29:49.878814 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:29:49.878925 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:29:49.878991 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:29:49.879001 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:29:49.879009 kernel: PCI host bridge to bus 0000:00
Feb 13 15:29:49.879085 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:29:49.879145 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:29:49.879201 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:29:49.879256 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:29:49.879334 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:29:49.879412 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:29:49.879480 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:29:49.879600 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:29:49.879696 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.879761 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:29:49.879843 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.879910 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:29:49.879980 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880048 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:29:49.880117 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880180 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:29:49.880250 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880313 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:29:49.880389 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880452 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:29:49.880590 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880668 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:29:49.880740 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880804 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:29:49.880892 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:29:49.880965 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:29:49.881036 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:29:49.881099 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:29:49.881173 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:29:49.881240 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:29:49.881361 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:29:49.881444 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:29:49.881533 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:29:49.881609 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:29:49.881685 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:29:49.881751 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:29:49.881817 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:29:49.881945 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:29:49.882023 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:29:49.882096 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:29:49.882163 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:29:49.882229 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:29:49.882302 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:29:49.882369 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:29:49.882437 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:29:49.882514 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:29:49.883647 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:29:49.883718 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:29:49.883784 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:29:49.883898 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:29:49.883974 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:29:49.884037 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:29:49.884110 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:29:49.884177 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:29:49.884241 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:29:49.884307 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:29:49.884372 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:29:49.884436 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:29:49.884506 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:29:49.885404 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:29:49.885477 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:29:49.887277 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:29:49.887359 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:29:49.887423 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:29:49.887490 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:29:49.887595 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:29:49.887661 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:29:49.887727 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:29:49.887790 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:29:49.887876 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:29:49.887947 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:29:49.888011 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:29:49.888073 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:29:49.888144 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:29:49.888206 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:29:49.888270 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:29:49.888347 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:29:49.888412 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:29:49.888479 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:29:49.888560 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:29:49.888632 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:29:49.888696 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:29:49.888761 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:29:49.888825 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:29:49.888938 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:29:49.889002 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:29:49.889071 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:29:49.889134 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:29:49.889198 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:29:49.889262 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:29:49.889325 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:29:49.889389 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:29:49.889453 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:29:49.889532 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:29:49.889606 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:29:49.889671 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:29:49.889734 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:29:49.889797 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:29:49.889885 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:29:49.889952 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:29:49.890016 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:29:49.890084 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:29:49.890149 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:29:49.890212 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:29:49.890275 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:29:49.890337 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:29:49.890401 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:29:49.890463 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:29:49.890555 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:29:49.890630 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:29:49.890695 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:29:49.890757 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:29:49.890821 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:29:49.890947 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:29:49.891017 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:29:49.891089 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:29:49.891156 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:29:49.891225 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:29:49.891289 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:29:49.891357 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:29:49.891419 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:29:49.891481 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:29:49.891610 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:29:49.891683 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:29:49.891745 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:29:49.891808 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:29:49.891893 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:29:49.891967 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:29:49.892033 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:29:49.892101 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:29:49.892166 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:29:49.892230 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:29:49.892294 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:29:49.892365 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:29:49.892430 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:29:49.892492 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:29:49.892587 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:29:49.892658 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:29:49.892729 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:29:49.892797 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:29:49.892877 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:29:49.892942 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:29:49.893004 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:29:49.893068 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:29:49.893138 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:29:49.893464 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:29:49.894242 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:29:49.894319 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:29:49.894381 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:29:49.894442 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:29:49.894512 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:29:49.894603 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:29:49.894669 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:29:49.894741 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:29:49.894804 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:29:49.894883 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:29:49.894949 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:29:49.895035 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:29:49.895101 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:29:49.895164 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:29:49.895238 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:29:49.895309 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:29:49.895372 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:29:49.895434 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:29:49.895496 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:29:49.897454 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:29:49.897553 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:29:49.897615 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:29:49.897692 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:29:49.897752 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:29:49.897810 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:29:49.897921 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:29:49.897983 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:29:49.898041 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:29:49.898107 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:29:49.898171 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:29:49.898241 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:29:49.898307 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:29:49.898366 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:29:49.898426 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:29:49.898496 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:29:49.898632 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:29:49.898694 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:29:49.898762 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:29:49.898825 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:29:49.898905 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:29:49.898972 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:29:49.899029 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:29:49.899087 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:29:49.899155 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:29:49.899214 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:29:49.899271 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:29:49.899339 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:29:49.899397 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:29:49.899455 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:29:49.899465 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:29:49.899473 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:29:49.899480 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:29:49.899488 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:29:49.899496 kernel: iommu: Default domain type: Translated
Feb 13 15:29:49.899505 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:29:49.899513 kernel: efivars: Registered efivars operations
Feb 13 15:29:49.899559 kernel: vgaarb: loaded
Feb 13 15:29:49.899567 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:29:49.899575 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:29:49.899583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:29:49.899591 kernel: pnp: PnP ACPI init
Feb 13 15:29:49.899669 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:29:49.899684 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:29:49.899692 kernel: NET: Registered PF_INET protocol family
Feb 13 15:29:49.899700 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:29:49.899707 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:29:49.899715 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:29:49.899723 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:29:49.899731 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:29:49.899738 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:29:49.899746 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:29:49.899755 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:29:49.899764 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:29:49.899882 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:29:49.899896 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:29:49.899904 kernel: kvm [1]: HYP mode not available
Feb 13 15:29:49.899912 kernel: Initialise system trusted keyrings
Feb 13 15:29:49.899937 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:29:49.899947 kernel: Key type asymmetric registered
Feb 13 15:29:49.899954 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:29:49.899965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:29:49.899973 kernel: io scheduler mq-deadline registered
Feb 13 15:29:49.899980 kernel: io scheduler kyber registered
Feb 13 15:29:49.899988 kernel: io scheduler bfq registered
Feb 13 15:29:49.899997 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:29:49.900076 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:29:49.900143 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:29:49.900208 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:29:49.900276 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:29:49.900341 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:29:49.900406 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 15:29:49.900473 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:29:49.900598 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:29:49.900667 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.900761 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:29:49.900825 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:29:49.900907 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.900973 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:29:49.901037 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:29:49.901098 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.901167 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:29:49.901231 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:29:49.901293 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.901358 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:29:49.901420 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:29:49.901482 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.901613 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:29:49.901680 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:29:49.901742 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
15:29:49.901752 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:29:49.901815 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:29:49.901896 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:29:49.901966 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:49.901978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:29:49.901986 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:29:49.901993 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:29:49.902062 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:29:49.902131 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:29:49.902142 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:29:49.902150 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:29:49.902215 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:29:49.902228 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:29:49.902235 kernel: thunder_xcv, ver 1.0 Feb 13 15:29:49.902243 kernel: thunder_bgx, ver 1.0 Feb 13 15:29:49.902251 kernel: nicpf, ver 1.0 Feb 13 15:29:49.902258 kernel: nicvf, ver 1.0 Feb 13 15:29:49.902333 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:29:49.902394 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:29:49 UTC (1739460589) Feb 13 15:29:49.902404 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:29:49.902415 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:29:49.902423 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:29:49.902430 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:29:49.902438 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:29:49.902446 kernel: Segment 
Routing with IPv6 Feb 13 15:29:49.902453 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:29:49.902461 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:29:49.902469 kernel: Key type dns_resolver registered Feb 13 15:29:49.902476 kernel: registered taskstats version 1 Feb 13 15:29:49.902486 kernel: Loading compiled-in X.509 certificates Feb 13 15:29:49.902495 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:29:49.902503 kernel: Key type .fscrypt registered Feb 13 15:29:49.902510 kernel: Key type fscrypt-provisioning registered Feb 13 15:29:49.902531 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:29:49.902539 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:29:49.902546 kernel: ima: No architecture policies found Feb 13 15:29:49.902554 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:29:49.902564 kernel: clk: Disabling unused clocks Feb 13 15:29:49.902571 kernel: Freeing unused kernel memory: 39680K Feb 13 15:29:49.902579 kernel: Run /init as init process Feb 13 15:29:49.902587 kernel: with arguments: Feb 13 15:29:49.902594 kernel: /init Feb 13 15:29:49.902601 kernel: with environment: Feb 13 15:29:49.902609 kernel: HOME=/ Feb 13 15:29:49.902616 kernel: TERM=linux Feb 13 15:29:49.902623 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:29:49.902633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:29:49.902645 systemd[1]: Detected virtualization kvm. Feb 13 15:29:49.902653 systemd[1]: Detected architecture arm64. Feb 13 15:29:49.902661 systemd[1]: Running in initrd. 
Feb 13 15:29:49.902669 systemd[1]: No hostname configured, using default hostname. Feb 13 15:29:49.902677 systemd[1]: Hostname set to . Feb 13 15:29:49.902685 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:29:49.902693 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:29:49.902702 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:29:49.902710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:29:49.902719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:29:49.902727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:29:49.902735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:29:49.902744 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:29:49.902754 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:29:49.902763 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:29:49.902771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:29:49.902780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:29:49.902788 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:29:49.902796 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:29:49.902804 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:29:49.902812 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:29:49.902820 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 15:29:49.902866 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:29:49.902875 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:29:49.902883 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:29:49.902892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:29:49.902900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:29:49.902908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:29:49.902916 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:29:49.902924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:29:49.902935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:29:49.902943 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:29:49.902951 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:29:49.902959 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:29:49.902968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:29:49.902976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:49.902984 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:29:49.902992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:29:49.903000 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:29:49.903036 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 15:29:49.903059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:29:49.903068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:29:49.903076 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:29:49.903084 kernel: Bridge firewalling registered Feb 13 15:29:49.903092 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:29:49.903101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:29:49.903109 systemd-journald[238]: Journal started Feb 13 15:29:49.903130 systemd-journald[238]: Runtime Journal (/run/log/journal/6c9f3318576247dfb1785cdb888fc7d7) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:29:49.873569 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 15:29:49.914481 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:29:49.896838 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 15:29:49.918839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:49.926023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:29:49.926077 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:29:49.941698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:29:49.943918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:29:49.947550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:49.959907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:29:49.961536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:29:49.963361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:29:49.971721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:29:49.981959 dracut-cmdline[269]: dracut-dracut-053 Feb 13 15:29:49.986528 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:29:50.001175 systemd-resolved[275]: Positive Trust Anchors: Feb 13 15:29:50.002041 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:29:50.002824 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:29:50.011997 systemd-resolved[275]: Defaulting to hostname 'linux'. Feb 13 15:29:50.013043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:29:50.013937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:29:50.085577 kernel: SCSI subsystem initialized Feb 13 15:29:50.090554 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:29:50.097557 kernel: iscsi: registered transport (tcp) Feb 13 15:29:50.111563 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:29:50.111641 kernel: QLogic iSCSI HBA Driver Feb 13 15:29:50.161334 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:29:50.168751 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:29:50.192726 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:29:50.192794 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:29:50.192806 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:29:50.243584 kernel: raid6: neonx8 gen() 15363 MB/s Feb 13 15:29:50.260835 kernel: raid6: neonx4 gen() 13428 MB/s Feb 13 15:29:50.277574 kernel: raid6: neonx2 gen() 13176 MB/s Feb 13 15:29:50.294566 kernel: raid6: neonx1 gen() 10445 MB/s Feb 13 15:29:50.311594 kernel: raid6: int64x8 gen() 6924 MB/s Feb 13 15:29:50.328597 kernel: raid6: int64x4 gen() 7309 MB/s Feb 13 15:29:50.345571 kernel: raid6: int64x2 gen() 6099 MB/s Feb 13 15:29:50.362577 kernel: raid6: int64x1 gen() 5034 MB/s Feb 13 15:29:50.362642 kernel: raid6: using algorithm neonx8 gen() 15363 MB/s Feb 13 15:29:50.379585 kernel: raid6: .... xor() 11848 MB/s, rmw enabled Feb 13 15:29:50.379649 kernel: raid6: using neon recovery algorithm Feb 13 15:29:50.384550 kernel: xor: measuring software checksum speed Feb 13 15:29:50.384614 kernel: 8regs : 19797 MB/sec Feb 13 15:29:50.384634 kernel: 32regs : 19664 MB/sec Feb 13 15:29:50.385558 kernel: arm64_neon : 25648 MB/sec Feb 13 15:29:50.385589 kernel: xor: using function: arm64_neon (25648 MB/sec) Feb 13 15:29:50.435614 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:29:50.447907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:29:50.455742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:29:50.468397 systemd-udevd[455]: Using default interface naming scheme 'v255'. Feb 13 15:29:50.471766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:29:50.481225 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:29:50.496763 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Feb 13 15:29:50.533507 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:29:50.542681 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:29:50.590334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:29:50.597687 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:29:50.622079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:29:50.625616 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:29:50.626992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:29:50.627665 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:29:50.637691 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:29:50.650761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:29:50.684064 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:29:50.685569 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:29:50.686537 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:29:50.707237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:29:50.709222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:29:50.715944 kernel: ACPI: bus type USB registered Feb 13 15:29:50.715969 kernel: usbcore: registered new interface driver usbfs Feb 13 15:29:50.715980 kernel: usbcore: registered new interface driver hub Feb 13 15:29:50.715989 kernel: usbcore: registered new device driver usb Feb 13 15:29:50.712193 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:29:50.715200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:29:50.715364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:50.716461 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:50.723869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:50.744862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:50.752760 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:29:50.755940 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:29:50.756056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:29:50.756066 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:29:50.756712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:29:50.765538 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:29:50.774647 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:29:50.774780 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:29:50.774917 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:29:50.775002 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:29:50.775079 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:29:50.775155 kernel: hub 1-0:1.0: USB hub found Feb 13 15:29:50.775256 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:29:50.775333 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:29:50.784691 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:29:50.784869 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:29:50.784967 kernel: hub 2-0:1.0: USB hub found Feb 13 15:29:50.785070 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:29:50.785162 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:29:50.785245 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:29:50.785327 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:29:50.785417 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:29:50.785428 kernel: GPT:17805311 != 80003071 Feb 13 15:29:50.785437 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:29:50.785446 kernel: GPT:17805311 != 80003071 Feb 13 15:29:50.785455 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:29:50.785464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:50.785473 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:29:50.783873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:29:50.825423 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (515) Feb 13 15:29:50.827546 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (517) Feb 13 15:29:50.831254 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:29:50.844322 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:29:50.852431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:29:50.857507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:29:50.858196 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:29:50.867880 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:29:50.872620 disk-uuid[574]: Primary Header is updated. Feb 13 15:29:50.872620 disk-uuid[574]: Secondary Entries is updated. Feb 13 15:29:50.872620 disk-uuid[574]: Secondary Header is updated. 
Feb 13 15:29:50.885549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:51.010571 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:29:51.253555 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:29:51.389013 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:29:51.389065 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:29:51.391541 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:29:51.445579 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:29:51.447544 kernel: usbcore: registered new interface driver usbhid Feb 13 15:29:51.447597 kernel: usbhid: USB HID core driver Feb 13 15:29:51.889610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:51.890568 disk-uuid[575]: The operation has completed successfully. Feb 13 15:29:51.940562 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:29:51.940675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:29:51.955759 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:29:51.961179 sh[586]: Success Feb 13 15:29:51.973622 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:29:52.022267 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:29:52.030579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:29:52.031269 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:29:52.055624 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:29:52.055690 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:52.055712 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:29:52.056778 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:29:52.056849 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:29:52.063546 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:29:52.065688 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:29:52.066399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:29:52.075805 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:29:52.079732 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:29:52.091402 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:29:52.091451 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:52.091470 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:29:52.096160 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:29:52.096223 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:29:52.109466 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:29:52.110364 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:29:52.115225 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:29:52.120749 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:29:52.200574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:29:52.210703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:29:52.216076 ignition[676]: Ignition 2.20.0 Feb 13 15:29:52.216093 ignition[676]: Stage: fetch-offline Feb 13 15:29:52.217951 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:29:52.216130 ignition[676]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:52.216138 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:52.216299 ignition[676]: parsed url from cmdline: "" Feb 13 15:29:52.216303 ignition[676]: no config URL provided Feb 13 15:29:52.216308 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:29:52.216315 ignition[676]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:29:52.216320 ignition[676]: failed to fetch config: resource requires networking Feb 13 15:29:52.216493 ignition[676]: Ignition finished successfully Feb 13 15:29:52.232213 systemd-networkd[773]: lo: Link UP Feb 13 15:29:52.232226 systemd-networkd[773]: lo: Gained carrier Feb 13 15:29:52.233795 systemd-networkd[773]: Enumeration completed Feb 13 15:29:52.234038 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:29:52.234727 systemd[1]: Reached target network.target - Network. Feb 13 15:29:52.236876 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:52.236879 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:52.237684 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:29:52.237687 systemd-networkd[773]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:29:52.238246 systemd-networkd[773]: eth0: Link UP
Feb 13 15:29:52.238250 systemd-networkd[773]: eth0: Gained carrier
Feb 13 15:29:52.238257 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:29:52.242747 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:29:52.246775 systemd-networkd[773]: eth1: Link UP
Feb 13 15:29:52.246780 systemd-networkd[773]: eth1: Gained carrier
Feb 13 15:29:52.246788 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:29:52.257017 ignition[777]: Ignition 2.20.0
Feb 13 15:29:52.257026 ignition[777]: Stage: fetch
Feb 13 15:29:52.257470 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:52.257480 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:52.257589 ignition[777]: parsed url from cmdline: ""
Feb 13 15:29:52.257592 ignition[777]: no config URL provided
Feb 13 15:29:52.257597 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:29:52.257605 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:29:52.257688 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 15:29:52.258542 ignition[777]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 15:29:52.269621 systemd-networkd[773]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:29:52.312635 systemd-networkd[773]: eth0: DHCPv4 address 138.199.158.182/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:29:52.458772 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 15:29:52.464783 ignition[777]: GET result: OK
Feb 13 15:29:52.465012 ignition[777]: parsing config with SHA512: 1e3b529a58a4f0e6e2fc08490d6f948bd8bbedeb93b4e2794f520dcbf2c79f330057fa9a5cb361a93741977b4f115323501eda75b4cf5fc7c8cc6bb156a01e6d
Feb 13 15:29:52.471693 unknown[777]: fetched base config from "system"
Feb 13 15:29:52.472110 ignition[777]: fetch: fetch complete
Feb 13 15:29:52.471704 unknown[777]: fetched base config from "system"
Feb 13 15:29:52.472116 ignition[777]: fetch: fetch passed
Feb 13 15:29:52.471709 unknown[777]: fetched user config from "hetzner"
Feb 13 15:29:52.472173 ignition[777]: Ignition finished successfully
Feb 13 15:29:52.474048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:29:52.477843 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:29:52.491870 ignition[784]: Ignition 2.20.0
Feb 13 15:29:52.491880 ignition[784]: Stage: kargs
Feb 13 15:29:52.492063 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:52.492073 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:52.493094 ignition[784]: kargs: kargs passed
Feb 13 15:29:52.493152 ignition[784]: Ignition finished successfully
Feb 13 15:29:52.495890 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:29:52.505854 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:29:52.517941 ignition[791]: Ignition 2.20.0
Feb 13 15:29:52.517952 ignition[791]: Stage: disks
Feb 13 15:29:52.518129 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:52.518138 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:52.519103 ignition[791]: disks: disks passed
Feb 13 15:29:52.519156 ignition[791]: Ignition finished successfully
Feb 13 15:29:52.523019 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:29:52.524200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
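The fetch stage above fails on attempt #1 ("network is unreachable", since DHCP has not completed yet), succeeds on attempt #2, and then logs the SHA512 of the fetched config before parsing it. A minimal sketch of that retry-then-digest pattern, with a local stub standing in for the Hetzner metadata GET (the function names here are illustrative, not Ignition's actual code):

```python
import hashlib
import time

def fetch_with_retry(fetch, attempts=5, delay=0.1):
    """Retry a fetch callable, mirroring Ignition's 'attempt #N' loop."""
    last_err = None
    for _ in range(attempts):
        try:
            return fetch()
        except OSError as err:  # e.g. "connect: network is unreachable"
            last_err = err
            time.sleep(delay)
    raise last_err

# Stub standing in for GET http://169.254.169.254/hetzner/v1/userdata:
# the first call fails (no route before DHCP), the second succeeds.
calls = {"n": 0}
def stub_fetch():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("connect: network is unreachable")
    return b'{"ignition": {"version": "3.4.0"}}'

config = fetch_with_retry(stub_fetch)
# Like the log line above, record the SHA512 of the raw payload
# before handing it to the parser.
digest = hashlib.sha512(config).hexdigest()
print(digest)
```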
Feb 13 15:29:52.525323 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:29:52.526855 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:29:52.528260 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:29:52.529484 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:29:52.539738 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:29:52.557772 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:29:52.562802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:29:52.571719 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:29:52.623697 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:29:52.624246 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:29:52.626151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:29:52.638739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:29:52.642659 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:29:52.645763 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:29:52.646771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:29:52.646828 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:29:52.656967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:29:52.663599 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (808)
Feb 13 15:29:52.666685 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:29:52.666731 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:29:52.666743 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:29:52.666999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:29:52.675597 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:29:52.675663 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:29:52.678379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:29:52.712382 coreos-metadata[810]: Feb 13 15:29:52.712 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 15:29:52.715236 coreos-metadata[810]: Feb 13 15:29:52.714 INFO Fetch successful
Feb 13 15:29:52.717675 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:29:52.719180 coreos-metadata[810]: Feb 13 15:29:52.718 INFO wrote hostname ci-4152-2-1-4-c758b1cf91 to /sysroot/etc/hostname
Feb 13 15:29:52.720743 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:29:52.725793 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:29:52.730623 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:29:52.734419 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:29:52.844474 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:29:52.854733 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:29:52.859875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:29:52.866533 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:29:52.887836 ignition[928]: INFO : Ignition 2.20.0
Feb 13 15:29:52.887836 ignition[928]: INFO : Stage: mount
Feb 13 15:29:52.888850 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:52.888850 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:52.888850 ignition[928]: INFO : mount: mount passed
Feb 13 15:29:52.891713 ignition[928]: INFO : Ignition finished successfully
Feb 13 15:29:52.891701 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:29:52.902667 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:29:52.903418 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:29:53.055891 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:29:53.074856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:29:53.086841 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Feb 13 15:29:53.086895 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:29:53.086914 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:29:53.087814 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:29:53.091787 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:29:53.091890 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:29:53.094551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:29:53.122991 ignition[957]: INFO : Ignition 2.20.0
Feb 13 15:29:53.123703 ignition[957]: INFO : Stage: files
Feb 13 15:29:53.124088 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:53.124088 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:53.125482 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:29:53.126220 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:29:53.126220 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:29:53.129497 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:29:53.130713 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:29:53.130713 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:29:53.130039 unknown[957]: wrote ssh authorized keys file for user: core
Feb 13 15:29:53.133364 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:29:53.133364 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:29:53.133364 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:29:53.133364 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:29:53.185325 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:29:53.348087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:29:53.348087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:29:53.350864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:29:53.518898 systemd-networkd[773]: eth0: Gained IPv6LL
Feb 13 15:29:53.916174 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:29:54.188390 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:29:54.188390 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:29:54.192037 ignition[957]: INFO : files: files passed
Feb 13 15:29:54.215678 ignition[957]: INFO : Ignition finished successfully
Feb 13 15:29:54.194345 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:29:54.208728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:29:54.213491 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:29:54.217348 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:29:54.217441 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
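The files stage above ends by writing a result file, `/sysroot/etc/.ignition-result.json`, after every stage (fetch, kargs, disks, mount, files) has logged "passed". A toy sketch of producing such a per-stage result summary as JSON; the field names here are invented for illustration, since the real `.ignition-result.json` schema is not shown in this log:

```python
import json
import os
import tempfile

# Hypothetical, simplified summary of the stage outcomes seen in the log.
stages = ["fetch", "kargs", "disks", "mount", "files"]
result = {
    "provider": "hetzner",
    "stages": [{"name": s, "result": "passed"} for s in stages],
    "result": "finished successfully",
}

# Write it the way Ignition writes into the mounted sysroot; a temp
# directory stands in for /sysroot/etc here.
sysroot = tempfile.mkdtemp()
path = os.path.join(sysroot, ".ignition-result.json")
with open(path, "w") as f:
    json.dump(result, f, indent=2)

# Read it back, as a later boot stage or debugging session might.
with open(path) as f:
    loaded = json.load(f)
print(loaded["result"])
```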
Feb 13 15:29:54.223717 systemd-networkd[773]: eth1: Gained IPv6LL
Feb 13 15:29:54.239487 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:29:54.239487 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:29:54.245552 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:29:54.246721 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:29:54.247839 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:29:54.254763 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:29:54.289355 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:29:54.290621 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:29:54.292831 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:29:54.293758 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:29:54.295017 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:29:54.296621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:29:54.313497 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:29:54.320773 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:29:54.330144 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:29:54.330935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:29:54.332214 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:29:54.333359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:29:54.333467 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:29:54.335814 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:29:54.336593 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:29:54.337790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:29:54.340128 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:29:54.341132 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:29:54.342683 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:29:54.344258 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:29:54.345467 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:29:54.346478 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:29:54.347611 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:29:54.348548 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:29:54.348668 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:29:54.350000 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:29:54.350667 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:29:54.351749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:29:54.355554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:29:54.356267 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:29:54.356378 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:29:54.359000 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:29:54.359174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:29:54.360666 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:29:54.360759 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:29:54.361849 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:29:54.361945 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:29:54.375147 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:29:54.380784 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:29:54.381873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:29:54.381999 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:29:54.383380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:29:54.383691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:29:54.388728 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:29:54.389403 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:29:54.394546 ignition[1009]: INFO : Ignition 2.20.0
Feb 13 15:29:54.394546 ignition[1009]: INFO : Stage: umount
Feb 13 15:29:54.394546 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:29:54.394546 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:29:54.397259 ignition[1009]: INFO : umount: umount passed
Feb 13 15:29:54.397259 ignition[1009]: INFO : Ignition finished successfully
Feb 13 15:29:54.397264 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:29:54.397355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:29:54.398889 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:29:54.398978 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:29:54.399937 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:29:54.399982 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:29:54.401721 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:29:54.401769 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:29:54.404127 systemd[1]: Stopped target network.target - Network.
Feb 13 15:29:54.406527 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:29:54.406598 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:29:54.408693 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:29:54.410054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:29:54.414055 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:29:54.415730 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:29:54.417055 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:29:54.418472 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:29:54.418531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:29:54.421300 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:29:54.421358 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:29:54.422491 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:29:54.422565 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:29:54.423591 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:29:54.423639 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:29:54.426045 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:29:54.427705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:29:54.430584 systemd-networkd[773]: eth0: DHCPv6 lease lost
Feb 13 15:29:54.431144 systemd-networkd[773]: eth1: DHCPv6 lease lost
Feb 13 15:29:54.434057 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:29:54.434887 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:29:54.435008 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:29:54.436808 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:29:54.436911 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:29:54.442065 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:29:54.442122 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:29:54.448661 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:29:54.450082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:29:54.451655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:29:54.454377 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:29:54.454435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:29:54.455502 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:29:54.458175 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:29:54.459172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:29:54.459217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:29:54.463777 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:29:54.470699 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:29:54.470875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:29:54.479373 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:29:54.479478 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:29:54.482177 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:29:54.482287 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:29:54.485153 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:29:54.485310 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:29:54.487586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:29:54.487626 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:29:54.489532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:29:54.489572 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:29:54.490568 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:29:54.490613 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:29:54.492152 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:29:54.492194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:29:54.493636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:29:54.493683 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:29:54.497847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:29:54.498433 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:29:54.498486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:29:54.503253 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:29:54.503303 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:29:54.504354 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:29:54.504392 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:29:54.506427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:29:54.506473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:29:54.520640 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:29:54.520870 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:29:54.523410 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:29:54.527937 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:29:54.553696 systemd[1]: Switching root.
Feb 13 15:29:54.588153 systemd-journald[238]: Journal stopped
Feb 13 15:29:55.456619 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:29:55.456683 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:29:55.456696 kernel: SELinux: policy capability open_perms=1
Feb 13 15:29:55.456705 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:29:55.456719 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:29:55.456729 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:29:55.456741 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:29:55.456750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:29:55.456759 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:29:55.456769 kernel: audit: type=1403 audit(1739460594.760:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:29:55.456780 systemd[1]: Successfully loaded SELinux policy in 35.033ms.
Feb 13 15:29:55.456815 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.762ms.
Feb 13 15:29:55.456829 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:29:55.456843 systemd[1]: Detected virtualization kvm.
Feb 13 15:29:55.456853 systemd[1]: Detected architecture arm64.
Feb 13 15:29:55.456866 systemd[1]: Detected first boot.
Feb 13 15:29:55.456876 systemd[1]: Hostname set to .
Feb 13 15:29:55.456886 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:29:55.456896 zram_generator::config[1073]: No configuration found.
Feb 13 15:29:55.456907 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:29:55.456917 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:29:55.456927 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:29:55.456938 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:29:55.456950 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:29:55.456961 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:29:55.456971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:29:55.456981 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:29:55.456991 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:29:55.457001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:29:55.457011 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:29:55.457021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:29:55.457033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:29:55.457044 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:29:55.457054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:29:55.457069 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:29:55.457084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:29:55.457094 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:29:55.457105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:29:55.457115 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:29:55.457126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:29:55.457143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:29:55.457155 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:29:55.457168 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:29:55.457178 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:29:55.457188 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:29:55.457199 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:29:55.457210 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:29:55.457222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:29:55.457232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:29:55.457243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:29:55.457253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:29:55.457264 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:29:55.457274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:29:55.457285 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:29:55.457295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:29:55.457311 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:29:55.457324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:29:55.457335 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:29:55.457345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:55.457355 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:29:55.457366 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:29:55.457376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:55.457388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:29:55.457399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:55.457410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:29:55.457420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:55.457431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 15:29:55.457442 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 15:29:55.457453 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 15:29:55.457465 kernel: fuse: init (API version 7.39) Feb 13 15:29:55.457475 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:29:55.457486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:29:55.464393 systemd-journald[1161]: Collecting audit messages is disabled. Feb 13 15:29:55.464469 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:29:55.464486 kernel: ACPI: bus type drm_connector registered Feb 13 15:29:55.464498 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:29:55.464508 kernel: loop: module loaded Feb 13 15:29:55.464629 systemd-journald[1161]: Journal started Feb 13 15:29:55.464659 systemd-journald[1161]: Runtime Journal (/run/log/journal/6c9f3318576247dfb1785cdb888fc7d7) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:29:55.480943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:29:55.485897 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:29:55.487124 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:29:55.488122 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:29:55.489067 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:29:55.490046 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:29:55.490935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:29:55.493750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 13 15:29:55.494864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:29:55.495926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:29:55.496944 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:29:55.497179 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:29:55.498695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:55.498947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:55.499896 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:29:55.500110 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:29:55.501024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:55.501246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:55.502535 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:29:55.502688 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:29:55.503868 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:55.504152 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:55.505159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:29:55.506282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:29:55.507467 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:29:55.521464 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:29:55.529624 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:29:55.533654 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 15:29:55.534497 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:29:55.542734 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:29:55.547867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:29:55.551711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:55.559358 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:29:55.563709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:55.567661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:55.576353 systemd-journald[1161]: Time spent on flushing to /var/log/journal/6c9f3318576247dfb1785cdb888fc7d7 is 36ms for 1113 entries. Feb 13 15:29:55.576353 systemd-journald[1161]: System Journal (/var/log/journal/6c9f3318576247dfb1785cdb888fc7d7) is 8.0M, max 584.8M, 576.8M free. Feb 13 15:29:55.633999 systemd-journald[1161]: Received client request to flush runtime journal. Feb 13 15:29:55.575669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:29:55.580889 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:29:55.583299 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:29:55.589983 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:29:55.590784 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:29:55.605299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 15:29:55.614682 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:29:55.615625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:55.638395 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:29:55.643744 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:29:55.646622 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Feb 13 15:29:55.646636 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Feb 13 15:29:55.651062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:29:55.659683 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:29:55.702059 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:29:55.709887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:29:55.723708 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Feb 13 15:29:55.724031 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Feb 13 15:29:55.728191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:29:56.111241 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:29:56.119730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:29:56.139816 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Feb 13 15:29:56.160296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:29:56.173293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:29:56.192763 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Feb 13 15:29:56.254362 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:29:56.283271 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 15:29:56.286560 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:29:56.338779 systemd-networkd[1242]: lo: Link UP Feb 13 15:29:56.338826 systemd-networkd[1242]: lo: Gained carrier Feb 13 15:29:56.340383 systemd-networkd[1242]: Enumeration completed Feb 13 15:29:56.340605 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:29:56.342451 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.342462 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:56.343933 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.343937 systemd-networkd[1242]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:56.344770 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.344823 systemd-networkd[1242]: eth0: Link UP Feb 13 15:29:56.344826 systemd-networkd[1242]: eth0: Gained carrier Feb 13 15:29:56.344834 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.351952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Feb 13 15:29:56.355863 systemd-networkd[1242]: eth1: Link UP Feb 13 15:29:56.355875 systemd-networkd[1242]: eth1: Gained carrier Feb 13 15:29:56.355892 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.361188 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:56.376602 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1241) Feb 13 15:29:56.380279 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Feb 13 15:29:56.381531 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:29:56.387137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:56.405856 systemd-networkd[1242]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:29:56.407315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:56.410690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:56.427706 systemd-networkd[1242]: eth0: DHCPv4 address 138.199.158.182/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:29:56.435757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:56.436353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:29:56.436397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 15:29:56.444567 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:29:56.444653 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:29:56.444667 kernel: [drm] features: -context_init Feb 13 15:29:56.445115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:56.445291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:56.454712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:56.454905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:56.458372 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:56.459116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:56.466650 kernel: [drm] number of scanouts: 1 Feb 13 15:29:56.466702 kernel: [drm] number of cap sets: 0 Feb 13 15:29:56.467745 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:29:56.483566 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:29:56.489553 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:29:56.495737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:29:56.498598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:56.498881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:56.504948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:56.512808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:29:56.513196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:29:56.519878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:56.588475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:56.617090 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:29:56.623724 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:29:56.643396 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:29:56.673102 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:29:56.675997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:29:56.683763 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:29:56.688315 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:29:56.715127 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:29:56.717300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:29:56.718650 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:29:56.718775 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:29:56.719416 systemd[1]: Reached target machines.target - Containers. Feb 13 15:29:56.721466 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:29:56.729797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:29:56.734691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Feb 13 15:29:56.736283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:56.738887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:29:56.744693 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:29:56.751687 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:29:56.753170 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:29:56.767358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:29:56.779605 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 15:29:56.786628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:29:56.788757 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:29:56.798804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:29:56.821619 kernel: loop1: detected capacity change from 0 to 194512 Feb 13 15:29:56.853607 kernel: loop2: detected capacity change from 0 to 8 Feb 13 15:29:56.890560 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:29:56.928571 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 15:29:56.948574 kernel: loop5: detected capacity change from 0 to 194512 Feb 13 15:29:56.968616 kernel: loop6: detected capacity change from 0 to 8 Feb 13 15:29:56.973565 kernel: loop7: detected capacity change from 0 to 116808 Feb 13 15:29:56.987827 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:29:56.989063 (sd-merge)[1330]: Merged extensions into '/usr'. 
Feb 13 15:29:56.994175 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:29:56.994189 systemd[1]: Reloading... Feb 13 15:29:57.077631 zram_generator::config[1359]: No configuration found. Feb 13 15:29:57.151503 ldconfig[1312]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:29:57.181354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:29:57.236354 systemd[1]: Reloading finished in 241 ms. Feb 13 15:29:57.254928 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:29:57.256999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:29:57.264738 systemd[1]: Starting ensure-sysext.service... Feb 13 15:29:57.267752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:29:57.275034 systemd[1]: Reloading requested from client PID 1403 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:29:57.275054 systemd[1]: Reloading... Feb 13 15:29:57.288079 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:29:57.288857 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:29:57.289743 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:29:57.290115 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Feb 13 15:29:57.290243 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Feb 13 15:29:57.292989 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 15:29:57.293096 systemd-tmpfiles[1404]: Skipping /boot Feb 13 15:29:57.300893 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:29:57.301005 systemd-tmpfiles[1404]: Skipping /boot Feb 13 15:29:57.345540 zram_generator::config[1433]: No configuration found. Feb 13 15:29:57.359733 systemd-networkd[1242]: eth0: Gained IPv6LL Feb 13 15:29:57.453206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:29:57.508340 systemd[1]: Reloading finished in 232 ms. Feb 13 15:29:57.523900 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:29:57.530261 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:29:57.542748 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:29:57.549336 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:29:57.558043 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:29:57.562701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:29:57.576851 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:29:57.583890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:57.588768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:57.592888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:57.598840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:29:57.601861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:57.617689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:29:57.630330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:57.630953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:57.638514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:57.652666 augenrules[1513]: No rules Feb 13 15:29:57.655241 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:57.656816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:57.665122 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:29:57.669062 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:29:57.669304 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:29:57.673047 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:29:57.681155 systemd-resolved[1489]: Positive Trust Anchors: Feb 13 15:29:57.681230 systemd-resolved[1489]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:29:57.681260 systemd-resolved[1489]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:29:57.681838 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:29:57.683908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:57.684085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:57.685806 systemd-resolved[1489]: Using system hostname 'ci-4152-2-1-4-c758b1cf91'. Feb 13 15:29:57.688172 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:29:57.689189 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:57.689347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:57.690400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:57.690603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:57.699034 systemd[1]: Reached target network.target - Network. Feb 13 15:29:57.699828 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:29:57.700501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:29:57.701300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 15:29:57.701430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:57.701513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:29:57.702167 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:29:57.711748 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:29:57.713226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:57.715836 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:57.722795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:29:57.730168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:57.736810 augenrules[1534]: /sbin/augenrules: No change Feb 13 15:29:57.743081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:57.746860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:57.747044 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:29:57.748290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:57.748448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:57.749536 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:29:57.749681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 15:29:57.752835 augenrules[1557]: No rules Feb 13 15:29:57.753140 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:57.753313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:57.756374 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:29:57.756607 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:29:57.759280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:57.760800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:57.762168 systemd[1]: Finished ensure-sysext.service. Feb 13 15:29:57.768668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:57.768966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:57.773690 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:29:57.835146 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:29:57.838552 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:29:57.840112 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:29:57.840872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:29:57.841575 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:29:57.842260 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:29:57.842289 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:29:57.842824 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:29:57.843507 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:29:57.844222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:29:57.844939 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:29:57.845962 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:29:57.848200 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:29:57.850018 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:29:57.852803 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:29:57.853405 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:29:57.853997 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:29:57.854670 systemd[1]: System is tainted: cgroupsv1 Feb 13 15:29:57.854707 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:29:57.854728 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:29:57.857732 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:29:57.861318 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:29:57.865537 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:29:57.869065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:29:57.872744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:29:57.873944 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:29:57.877480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:29:57.886687 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:29:57.890434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:29:57.896539 jq[1581]: false Feb 13 15:29:57.909123 coreos-metadata[1578]: Feb 13 15:29:57.909 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:29:57.915412 coreos-metadata[1578]: Feb 13 15:29:57.912 INFO Fetch successful Feb 13 15:29:57.915412 coreos-metadata[1578]: Feb 13 15:29:57.912 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:29:57.915412 coreos-metadata[1578]: Feb 13 15:29:57.912 INFO Fetch successful Feb 13 15:29:57.912913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:29:57.918678 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:29:57.916984 dbus-daemon[1580]: [system] SELinux support is enabled Feb 13 15:29:57.927581 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:29:57.935084 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 15:29:57.940080 extend-filesystems[1582]: Found loop4 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found loop5 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found loop6 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found loop7 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda1 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda2 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda3 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found usr Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda4 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda6 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda7 Feb 13 15:29:57.942183 extend-filesystems[1582]: Found sda9 Feb 13 15:29:57.942183 extend-filesystems[1582]: Checking size of /dev/sda9 Feb 13 15:29:57.950693 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:29:57.952444 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:29:57.963860 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:29:57.971636 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:29:57.972829 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:29:57.986338 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:29:57.988436 jq[1613]: true Feb 13 15:29:57.987891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:29:57.990696 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:29:57.990931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:29:57.995910 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 15:29:57.996095 extend-filesystems[1582]: Resized partition /dev/sda9 Feb 13 15:29:58.005048 extend-filesystems[1625]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:29:58.029630 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:29:58.017974 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:29:58.018197 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:29:58.025445 systemd-timesyncd[1573]: Contacted time server 217.79.189.239:123 (0.flatcar.pool.ntp.org). Feb 13 15:29:58.025499 systemd-timesyncd[1573]: Initial clock synchronization to Thu 2025-02-13 15:29:57.810444 UTC. Feb 13 15:29:58.042408 update_engine[1611]: I20250213 15:29:58.041633 1611 main.cc:92] Flatcar Update Engine starting Feb 13 15:29:58.056481 update_engine[1611]: I20250213 15:29:58.056147 1611 update_check_scheduler.cc:74] Next update check in 4m43s Feb 13 15:29:58.059081 (ntainerd)[1631]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:29:58.077469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:29:58.078365 jq[1630]: true Feb 13 15:29:58.077529 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:29:58.080684 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:29:58.090271 tar[1628]: linux-arm64/helm Feb 13 15:29:58.080708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:29:58.088330 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 15:29:58.100654 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:29:58.108165 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:29:58.145752 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1240) Feb 13 15:29:58.190613 systemd-networkd[1242]: eth1: Gained IPv6LL Feb 13 15:29:58.199572 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:29:58.212037 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:29:58.214871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:29:58.216684 extend-filesystems[1625]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:29:58.216684 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:29:58.216684 extend-filesystems[1625]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:29:58.225396 extend-filesystems[1582]: Resized filesystem in /dev/sda9 Feb 13 15:29:58.225396 extend-filesystems[1582]: Found sr0 Feb 13 15:29:58.221873 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:29:58.222337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:29:58.241604 systemd-logind[1602]: New seat seat0. Feb 13 15:29:58.243050 systemd-logind[1602]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:29:58.243069 systemd-logind[1602]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:29:58.243283 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:29:58.271673 bash[1679]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:29:58.274969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 15:29:58.289987 systemd[1]: Starting sshkeys.service... Feb 13 15:29:58.313089 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:29:58.353044 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:29:58.408193 coreos-metadata[1687]: Feb 13 15:29:58.408 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:29:58.410330 coreos-metadata[1687]: Feb 13 15:29:58.410 INFO Fetch successful Feb 13 15:29:58.412730 unknown[1687]: wrote ssh authorized keys file for user: core Feb 13 15:29:58.442873 locksmithd[1653]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:29:58.463902 update-ssh-keys[1695]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:29:58.454924 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:29:58.466290 systemd[1]: Finished sshkeys.service. Feb 13 15:29:58.473707 containerd[1631]: time="2025-02-13T15:29:58.473546600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.574847160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.577669240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.577708000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.577725120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.577956760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.577978080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.578035040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:58.578472 containerd[1631]: time="2025-02-13T15:29:58.578045960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579128 containerd[1631]: time="2025-02-13T15:29:58.579086920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579128 containerd[1631]: time="2025-02-13T15:29:58.579125040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579186 containerd[1631]: time="2025-02-13T15:29:58.579143000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579186 containerd[1631]: time="2025-02-13T15:29:58.579152880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579274 containerd[1631]: time="2025-02-13T15:29:58.579256480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579478 containerd[1631]: time="2025-02-13T15:29:58.579457440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579649 containerd[1631]: time="2025-02-13T15:29:58.579630440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:58.579683 containerd[1631]: time="2025-02-13T15:29:58.579649600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:29:58.579743 containerd[1631]: time="2025-02-13T15:29:58.579727640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:29:58.579804 containerd[1631]: time="2025-02-13T15:29:58.579787760Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:29:58.589979 containerd[1631]: time="2025-02-13T15:29:58.589945840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:29:58.590597 containerd[1631]: time="2025-02-13T15:29:58.590575960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:29:58.590668 containerd[1631]: time="2025-02-13T15:29:58.590655960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:29:58.590747 containerd[1631]: time="2025-02-13T15:29:58.590733800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 15:29:58.590821 containerd[1631]: time="2025-02-13T15:29:58.590808120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:29:58.591041 containerd[1631]: time="2025-02-13T15:29:58.591020040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592480640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592630160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592652000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592668920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592683080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592696040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592709520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592723040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592739960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592752800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592765440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592793040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592814320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593542 containerd[1631]: time="2025-02-13T15:29:58.592829680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592841440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592855120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592869960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592885200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592896520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592909000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592921840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592936200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592956120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592970240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592983120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.592998000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.593022600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.593040240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.593903 containerd[1631]: time="2025-02-13T15:29:58.593052480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593217720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593236760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593247280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593259400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593268680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593281680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593291240Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:29:58.594152 containerd[1631]: time="2025-02-13T15:29:58.593300880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:29:58.595825 containerd[1631]: time="2025-02-13T15:29:58.595725280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:29:58.596596 containerd[1631]: time="2025-02-13T15:29:58.595981960Z" level=info msg="Connect containerd service" Feb 13 15:29:58.596596 containerd[1631]: time="2025-02-13T15:29:58.596036000Z" level=info msg="using legacy CRI server" Feb 13 15:29:58.596596 containerd[1631]: time="2025-02-13T15:29:58.596044160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:29:58.596596 containerd[1631]: time="2025-02-13T15:29:58.596278360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599041760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599244200Z" level=info msg="Start subscribing containerd event" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599289640Z" level=info msg="Start recovering state" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599354040Z" level=info msg="Start event monitor" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599366120Z" 
level=info msg="Start snapshots syncer" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599375120Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:29:58.599547 containerd[1631]: time="2025-02-13T15:29:58.599382520Z" level=info msg="Start streaming server" Feb 13 15:29:58.604857 containerd[1631]: time="2025-02-13T15:29:58.602174560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:29:58.604857 containerd[1631]: time="2025-02-13T15:29:58.602222880Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:29:58.604857 containerd[1631]: time="2025-02-13T15:29:58.603565280Z" level=info msg="containerd successfully booted in 0.134131s" Feb 13 15:29:58.602384 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:29:58.870562 tar[1628]: linux-arm64/LICENSE Feb 13 15:29:58.870562 tar[1628]: linux-arm64/README.md Feb 13 15:29:58.886273 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:29:59.109373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:29:59.129159 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:29:59.493110 sshd_keygen[1629]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:29:59.517279 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:29:59.526798 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:29:59.538003 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:29:59.538288 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:29:59.548247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:29:59.561820 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Feb 13 15:29:59.572813 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:29:59.576106 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:29:59.576957 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:29:59.577973 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:29:59.579299 systemd[1]: Startup finished in 5.846s (kernel) + 4.853s (userspace) = 10.699s. Feb 13 15:29:59.717442 kubelet[1717]: E0213 15:29:59.717323 1717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:29:59.721174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:29:59.721363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:09.972184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:30:09.981882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:10.096727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:30:10.101745 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:10.156467 kubelet[1763]: E0213 15:30:10.156416 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:10.162612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:10.162984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:20.413748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:30:20.424773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:20.523732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:20.527162 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:20.585285 kubelet[1784]: E0213 15:30:20.585224 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:20.588285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:20.588562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:30.839626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:30:30.854870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:30:30.968749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:30.985199 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:31.040492 kubelet[1805]: E0213 15:30:31.040398 1805 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:31.044782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:31.045185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:41.064640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:30:41.072817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:41.207772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:41.218140 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:41.269063 kubelet[1827]: E0213 15:30:41.268988 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:41.272721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:41.273023 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:30:42.912664 update_engine[1611]: I20250213 15:30:42.911886 1611 update_attempter.cc:509] Updating boot flags... Feb 13 15:30:42.965585 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1846) Feb 13 15:30:43.015543 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1847) Feb 13 15:30:48.868707 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:30:48.874947 systemd[1]: Started sshd@0-138.199.158.182:22-139.178.89.65:47350.service - OpenSSH per-connection server daemon (139.178.89.65:47350). Feb 13 15:30:49.874431 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 47350 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:30:49.877788 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:30:49.886933 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:30:49.893940 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:30:49.899554 systemd-logind[1602]: New session 1 of user core. Feb 13 15:30:49.906120 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:30:49.913103 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:30:49.920295 (systemd)[1861]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:30:50.024270 systemd[1861]: Queued start job for default target default.target. Feb 13 15:30:50.025063 systemd[1861]: Created slice app.slice - User Application Slice. Feb 13 15:30:50.025206 systemd[1861]: Reached target paths.target - Paths. Feb 13 15:30:50.025292 systemd[1861]: Reached target timers.target - Timers. Feb 13 15:30:50.033746 systemd[1861]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Feb 13 15:30:50.044396 systemd[1861]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:30:50.044464 systemd[1861]: Reached target sockets.target - Sockets.
Feb 13 15:30:50.044479 systemd[1861]: Reached target basic.target - Basic System.
Feb 13 15:30:50.044545 systemd[1861]: Reached target default.target - Main User Target.
Feb 13 15:30:50.044585 systemd[1861]: Startup finished in 117ms.
Feb 13 15:30:50.045264 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:30:50.050035 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:30:50.749000 systemd[1]: Started sshd@1-138.199.158.182:22-139.178.89.65:47358.service - OpenSSH per-connection server daemon (139.178.89.65:47358).
Feb 13 15:30:51.314590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 15:30:51.329875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:30:51.452651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:30:51.463230 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:30:51.520535 kubelet[1887]: E0213 15:30:51.520456 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:30:51.524224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:30:51.525540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:30:51.733163 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 47358 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:30:51.734940 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:30:51.742643 systemd-logind[1602]: New session 2 of user core.
Feb 13 15:30:51.749088 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:30:52.413689 sshd[1897]: Connection closed by 139.178.89.65 port 47358
Feb 13 15:30:52.414597 sshd-session[1873]: pam_unix(sshd:session): session closed for user core
Feb 13 15:30:52.419817 systemd[1]: sshd@1-138.199.158.182:22-139.178.89.65:47358.service: Deactivated successfully.
Feb 13 15:30:52.423990 systemd-logind[1602]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:30:52.424335 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:30:52.425919 systemd-logind[1602]: Removed session 2.
Feb 13 15:30:52.581982 systemd[1]: Started sshd@2-138.199.158.182:22-139.178.89.65:47372.service - OpenSSH per-connection server daemon (139.178.89.65:47372).
Feb 13 15:30:53.569962 sshd[1902]: Accepted publickey for core from 139.178.89.65 port 47372 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:30:53.571936 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:30:53.578822 systemd-logind[1602]: New session 3 of user core.
Feb 13 15:30:53.586017 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:30:54.249000 sshd[1905]: Connection closed by 139.178.89.65 port 47372
Feb 13 15:30:54.249955 sshd-session[1902]: pam_unix(sshd:session): session closed for user core
Feb 13 15:30:54.254730 systemd[1]: sshd@2-138.199.158.182:22-139.178.89.65:47372.service: Deactivated successfully.
Feb 13 15:30:54.260748 systemd-logind[1602]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:30:54.260938 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:30:54.262204 systemd-logind[1602]: Removed session 3.
Feb 13 15:30:54.419061 systemd[1]: Started sshd@3-138.199.158.182:22-139.178.89.65:47376.service - OpenSSH per-connection server daemon (139.178.89.65:47376).
Feb 13 15:30:55.411720 sshd[1910]: Accepted publickey for core from 139.178.89.65 port 47376 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:30:55.414190 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:30:55.421611 systemd-logind[1602]: New session 4 of user core.
Feb 13 15:30:55.431073 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:30:56.099814 sshd[1913]: Connection closed by 139.178.89.65 port 47376
Feb 13 15:30:56.100509 sshd-session[1910]: pam_unix(sshd:session): session closed for user core
Feb 13 15:30:56.105302 systemd[1]: sshd@3-138.199.158.182:22-139.178.89.65:47376.service: Deactivated successfully.
Feb 13 15:30:56.106730 systemd-logind[1602]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:30:56.108568 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:30:56.109960 systemd-logind[1602]: Removed session 4.
Feb 13 15:30:56.271216 systemd[1]: Started sshd@4-138.199.158.182:22-139.178.89.65:56522.service - OpenSSH per-connection server daemon (139.178.89.65:56522).
Feb 13 15:30:57.256200 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 56522 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:30:57.258018 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:30:57.262553 systemd-logind[1602]: New session 5 of user core.
Feb 13 15:30:57.269170 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:30:57.790005 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:30:57.790680 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:30:58.093993 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:30:58.094208 (dockerd)[1940]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:30:58.329704 dockerd[1940]: time="2025-02-13T15:30:58.328058304Z" level=info msg="Starting up"
Feb 13 15:30:58.428176 systemd[1]: var-lib-docker-metacopy\x2dcheck4195868231-merged.mount: Deactivated successfully.
Feb 13 15:30:58.436465 dockerd[1940]: time="2025-02-13T15:30:58.436354854Z" level=info msg="Loading containers: start."
Feb 13 15:30:58.602740 kernel: Initializing XFRM netlink socket
Feb 13 15:30:58.692002 systemd-networkd[1242]: docker0: Link UP
Feb 13 15:30:58.724178 dockerd[1940]: time="2025-02-13T15:30:58.724088844Z" level=info msg="Loading containers: done."
Feb 13 15:30:58.738916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck202528385-merged.mount: Deactivated successfully.
Feb 13 15:30:58.743166 dockerd[1940]: time="2025-02-13T15:30:58.742957196Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:30:58.743166 dockerd[1940]: time="2025-02-13T15:30:58.743073037Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:30:58.743665 dockerd[1940]: time="2025-02-13T15:30:58.743460240Z" level=info msg="Daemon has completed initialization"
Feb 13 15:30:58.780496 dockerd[1940]: time="2025-02-13T15:30:58.780323976Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:30:58.782396 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:30:59.881974 containerd[1631]: time="2025-02-13T15:30:59.881932484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:31:00.589544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528943195.mount: Deactivated successfully.
Feb 13 15:31:01.564567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Feb 13 15:31:01.571782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:31:01.705761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:31:01.717300 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:31:01.773089 kubelet[2192]: E0213 15:31:01.773019 2192 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:31:01.777686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:31:01.777893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:31:02.781685 containerd[1631]: time="2025-02-13T15:31:02.781593646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:02.784167 containerd[1631]: time="2025-02-13T15:31:02.784072063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205953"
Feb 13 15:31:02.785082 containerd[1631]: time="2025-02-13T15:31:02.785016509Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:02.788323 containerd[1631]: time="2025-02-13T15:31:02.788254532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:02.790157 containerd[1631]: time="2025-02-13T15:31:02.789434420Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.907457175s"
Feb 13 15:31:02.790157 containerd[1631]: time="2025-02-13T15:31:02.789480421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:31:02.810988 containerd[1631]: time="2025-02-13T15:31:02.810947050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:31:05.402259 containerd[1631]: time="2025-02-13T15:31:05.402154618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:05.403819 containerd[1631]: time="2025-02-13T15:31:05.403755948Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383111"
Feb 13 15:31:05.406049 containerd[1631]: time="2025-02-13T15:31:05.405991363Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:05.409465 containerd[1631]: time="2025-02-13T15:31:05.409331304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:05.410707 containerd[1631]: time="2025-02-13T15:31:05.410655832Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 2.599667142s"
Feb 13 15:31:05.410707 containerd[1631]: time="2025-02-13T15:31:05.410697312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:31:05.433211 containerd[1631]: time="2025-02-13T15:31:05.433170215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:31:07.179574 containerd[1631]: time="2025-02-13T15:31:07.178894453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:07.181769 containerd[1631]: time="2025-02-13T15:31:07.181672029Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15767000"
Feb 13 15:31:07.184406 containerd[1631]: time="2025-02-13T15:31:07.184289245Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:07.188494 containerd[1631]: time="2025-02-13T15:31:07.188424509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:07.191009 containerd[1631]: time="2025-02-13T15:31:07.190386601Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.757165786s"
Feb 13 15:31:07.191009 containerd[1631]: time="2025-02-13T15:31:07.190436601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:31:07.215719 containerd[1631]: time="2025-02-13T15:31:07.215634712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:31:08.171080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199178910.mount: Deactivated successfully.
Feb 13 15:31:08.714933 containerd[1631]: time="2025-02-13T15:31:08.714873380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:08.715864 containerd[1631]: time="2025-02-13T15:31:08.715681585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273401"
Feb 13 15:31:08.716598 containerd[1631]: time="2025-02-13T15:31:08.716506589Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:08.719451 containerd[1631]: time="2025-02-13T15:31:08.719396246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:08.721227 containerd[1631]: time="2025-02-13T15:31:08.721164256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.505467064s"
Feb 13 15:31:08.721328 containerd[1631]: time="2025-02-13T15:31:08.721227777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:31:08.747087 containerd[1631]: time="2025-02-13T15:31:08.747043087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:31:09.361887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527132593.mount: Deactivated successfully.
Feb 13 15:31:09.997873 containerd[1631]: time="2025-02-13T15:31:09.996466189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:09.999503 containerd[1631]: time="2025-02-13T15:31:09.999449286Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Feb 13 15:31:10.001067 containerd[1631]: time="2025-02-13T15:31:10.001025975Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:10.005085 containerd[1631]: time="2025-02-13T15:31:10.005044757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:10.008061 containerd[1631]: time="2025-02-13T15:31:10.007997653Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.260916646s"
Feb 13 15:31:10.008187 containerd[1631]: time="2025-02-13T15:31:10.008170854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:31:10.029812 containerd[1631]: time="2025-02-13T15:31:10.029781413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:31:10.592282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874267445.mount: Deactivated successfully.
Feb 13 15:31:10.601588 containerd[1631]: time="2025-02-13T15:31:10.600688236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:10.602863 containerd[1631]: time="2025-02-13T15:31:10.602772448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Feb 13 15:31:10.604173 containerd[1631]: time="2025-02-13T15:31:10.604102335Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:10.607457 containerd[1631]: time="2025-02-13T15:31:10.607360673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:10.608405 containerd[1631]: time="2025-02-13T15:31:10.608252838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 578.292584ms"
Feb 13 15:31:10.608405 containerd[1631]: time="2025-02-13T15:31:10.608288638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:31:10.634574 containerd[1631]: time="2025-02-13T15:31:10.634499222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:31:11.171894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372157317.mount: Deactivated successfully.
Feb 13 15:31:11.814427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Feb 13 15:31:11.823774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:31:11.945738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:31:11.950109 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:31:12.005925 kubelet[2349]: E0213 15:31:12.005855 2349 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:31:12.008380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:31:12.008562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:31:14.373046 containerd[1631]: time="2025-02-13T15:31:14.372951965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:14.374193 containerd[1631]: time="2025-02-13T15:31:14.374146371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866"
Feb 13 15:31:14.375139 containerd[1631]: time="2025-02-13T15:31:14.375055896Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:14.378540 containerd[1631]: time="2025-02-13T15:31:14.378325152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:31:14.379820 containerd[1631]: time="2025-02-13T15:31:14.379665799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.745088936s"
Feb 13 15:31:14.379820 containerd[1631]: time="2025-02-13T15:31:14.379707239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:31:20.160986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:31:20.172320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:31:20.199440 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-5.scope)...
Feb 13 15:31:20.199454 systemd[1]: Reloading...
Feb 13 15:31:20.305552 zram_generator::config[2468]: No configuration found.
Feb 13 15:31:20.412977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:31:20.475497 systemd[1]: Reloading finished in 275 ms.
Feb 13 15:31:20.532868 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:31:20.533048 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:31:20.533890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:31:20.548088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:31:20.669655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:31:20.675706 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:31:20.737160 kubelet[2528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:31:20.737160 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:31:20.737160 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:31:20.737618 kubelet[2528]: I0213 15:31:20.737257 2528 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:31:21.609078 kubelet[2528]: I0213 15:31:21.609031 2528 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:31:21.610562 kubelet[2528]: I0213 15:31:21.609286 2528 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:31:21.610562 kubelet[2528]: I0213 15:31:21.609721 2528 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:31:21.635947 kubelet[2528]: I0213 15:31:21.635893 2528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:31:21.636211 kubelet[2528]: E0213 15:31:21.636182 2528 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.158.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.647603 kubelet[2528]: I0213 15:31:21.647574 2528 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:31:21.649221 kubelet[2528]: I0213 15:31:21.649179 2528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:31:21.649486 kubelet[2528]: I0213 15:31:21.649455 2528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:31:21.649486 kubelet[2528]: I0213 15:31:21.649483 2528 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:31:21.649616 kubelet[2528]: I0213 15:31:21.649493 2528 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:31:21.650841 kubelet[2528]: I0213 15:31:21.650808 2528 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:31:21.653455 kubelet[2528]: I0213 15:31:21.653402 2528 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:31:21.653455 kubelet[2528]: I0213 15:31:21.653436 2528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:31:21.653721 kubelet[2528]: I0213 15:31:21.653658 2528 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:31:21.653721 kubelet[2528]: I0213 15:31:21.653690 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:31:21.655796 kubelet[2528]: W0213 15:31:21.655734 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.158.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-4-c758b1cf91&limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.655796 kubelet[2528]: E0213 15:31:21.655793 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.158.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-4-c758b1cf91&limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.657465 kubelet[2528]: W0213 15:31:21.657042 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.158.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.657465 kubelet[2528]: E0213 15:31:21.657100 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.158.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.657835 kubelet[2528]: I0213 15:31:21.657810 2528 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:31:21.660330 kubelet[2528]: I0213 15:31:21.660050 2528 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:31:21.660936 kubelet[2528]: W0213 15:31:21.660914 2528 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:31:21.661854 kubelet[2528]: I0213 15:31:21.661835 2528 server.go:1256] "Started kubelet"
Feb 13 15:31:21.663252 kubelet[2528]: I0213 15:31:21.662829 2528 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:31:21.663624 kubelet[2528]: I0213 15:31:21.663607 2528 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:31:21.665218 kubelet[2528]: I0213 15:31:21.665196 2528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:31:21.665707 kubelet[2528]: I0213 15:31:21.665687 2528 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:31:21.667024 kubelet[2528]: I0213 15:31:21.666991 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:31:21.670809 kubelet[2528]: E0213 15:31:21.670787 2528 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.158.182:6443/api/v1/namespaces/default/events\": dial tcp 138.199.158.182:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-1-4-c758b1cf91.1823ce461f3603d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-4-c758b1cf91,UID:ci-4152-2-1-4-c758b1cf91,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-4-c758b1cf91,},FirstTimestamp:2025-02-13 15:31:21.661809618 +0000 UTC m=+0.981443829,LastTimestamp:2025-02-13 15:31:21.661809618 +0000 UTC m=+0.981443829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-4-c758b1cf91,}"
Feb 13 15:31:21.673756 kubelet[2528]: I0213 15:31:21.673145 2528 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:31:21.674990 kubelet[2528]: E0213 15:31:21.674967 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.158.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-4-c758b1cf91?timeout=10s\": dial tcp 138.199.158.182:6443: connect: connection refused" interval="200ms"
Feb 13 15:31:21.676298 kubelet[2528]: I0213 15:31:21.676266 2528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:31:21.676856 kubelet[2528]: I0213 15:31:21.676830 2528 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:31:21.676919 kubelet[2528]: I0213 15:31:21.676902 2528 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:31:21.677790 kubelet[2528]: E0213 15:31:21.677681 2528 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:31:21.678259 kubelet[2528]: I0213 15:31:21.678244 2528 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:31:21.678426 kubelet[2528]: I0213 15:31:21.678414 2528 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:31:21.679068 kubelet[2528]: W0213 15:31:21.679015 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.158.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.679068 kubelet[2528]: E0213 15:31:21.679068 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.158.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused
Feb 13 15:31:21.688987 kubelet[2528]: I0213 15:31:21.688831 2528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:31:21.691895 kubelet[2528]: I0213 15:31:21.691865 2528 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Feb 13 15:31:21.692712 kubelet[2528]: I0213 15:31:21.692424 2528 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:31:21.699048 kubelet[2528]: I0213 15:31:21.699004 2528 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:31:21.699139 kubelet[2528]: E0213 15:31:21.699086 2528 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:31:21.700384 kubelet[2528]: W0213 15:31:21.700201 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.158.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:21.700384 kubelet[2528]: E0213 15:31:21.700316 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.158.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:21.710278 kubelet[2528]: I0213 15:31:21.710202 2528 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:31:21.710278 kubelet[2528]: I0213 15:31:21.710222 2528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:31:21.710278 kubelet[2528]: I0213 15:31:21.710241 2528 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:21.713003 kubelet[2528]: I0213 15:31:21.712966 2528 policy_none.go:49] "None policy: Start" Feb 13 15:31:21.714251 kubelet[2528]: I0213 15:31:21.713844 2528 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:31:21.714251 kubelet[2528]: I0213 15:31:21.713892 2528 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:31:21.723578 kubelet[2528]: I0213 15:31:21.723060 2528 manager.go:479] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:31:21.723578 kubelet[2528]: I0213 15:31:21.723424 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:31:21.727977 kubelet[2528]: E0213 15:31:21.727953 2528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-1-4-c758b1cf91\" not found" Feb 13 15:31:21.776582 kubelet[2528]: I0213 15:31:21.776486 2528 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.777257 kubelet[2528]: E0213 15:31:21.777124 2528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.158.182:6443/api/v1/nodes\": dial tcp 138.199.158.182:6443: connect: connection refused" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.799382 kubelet[2528]: I0213 15:31:21.799295 2528 topology_manager.go:215] "Topology Admit Handler" podUID="9ea225cc0b3a20cdfa743599e5d8c668" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.802728 kubelet[2528]: I0213 15:31:21.802055 2528 topology_manager.go:215] "Topology Admit Handler" podUID="5ba4c690828210ec8caf65e8c28eaa8a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.804167 kubelet[2528]: I0213 15:31:21.804126 2528 topology_manager.go:215] "Topology Admit Handler" podUID="dc5a2170d476c972c3ecf9cc09f834a9" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.877925 kubelet[2528]: E0213 15:31:21.876574 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.158.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-4-c758b1cf91?timeout=10s\": dial tcp 138.199.158.182:6443: connect: connection refused" interval="400ms" Feb 13 15:31:21.979253 kubelet[2528]: I0213 15:31:21.978572 2528 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979253 kubelet[2528]: I0213 15:31:21.978656 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979253 kubelet[2528]: I0213 15:31:21.978712 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979253 kubelet[2528]: I0213 15:31:21.978759 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979253 kubelet[2528]: I0213 15:31:21.978803 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba4c690828210ec8caf65e8c28eaa8a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-4-c758b1cf91\" (UID: 
\"5ba4c690828210ec8caf65e8c28eaa8a\") " pod="kube-system/kube-scheduler-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979782 kubelet[2528]: I0213 15:31:21.978844 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979782 kubelet[2528]: I0213 15:31:21.978888 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979782 kubelet[2528]: I0213 15:31:21.978942 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.979782 kubelet[2528]: I0213 15:31:21.978991 2528 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.980154 kubelet[2528]: I0213 15:31:21.980099 2528 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:21.980798 
kubelet[2528]: E0213 15:31:21.980767 2528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.158.182:6443/api/v1/nodes\": dial tcp 138.199.158.182:6443: connect: connection refused" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:22.109715 containerd[1631]: time="2025-02-13T15:31:22.109614707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-4-c758b1cf91,Uid:9ea225cc0b3a20cdfa743599e5d8c668,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:22.114873 containerd[1631]: time="2025-02-13T15:31:22.114565408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-4-c758b1cf91,Uid:5ba4c690828210ec8caf65e8c28eaa8a,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:22.114873 containerd[1631]: time="2025-02-13T15:31:22.114660329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-4-c758b1cf91,Uid:dc5a2170d476c972c3ecf9cc09f834a9,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:22.279511 kubelet[2528]: E0213 15:31:22.278717 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.158.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-4-c758b1cf91?timeout=10s\": dial tcp 138.199.158.182:6443: connect: connection refused" interval="800ms" Feb 13 15:31:22.383675 kubelet[2528]: I0213 15:31:22.383634 2528 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:22.384199 kubelet[2528]: E0213 15:31:22.384120 2528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.158.182:6443/api/v1/nodes\": dial tcp 138.199.158.182:6443: connect: connection refused" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:22.646160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710365647.mount: Deactivated successfully. 
Feb 13 15:31:22.652596 containerd[1631]: time="2025-02-13T15:31:22.652307103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:22.654293 containerd[1631]: time="2025-02-13T15:31:22.654224031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:22.656089 containerd[1631]: time="2025-02-13T15:31:22.656025159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:31:22.657193 containerd[1631]: time="2025-02-13T15:31:22.657116444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:31:22.659513 containerd[1631]: time="2025-02-13T15:31:22.659474654Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:22.660879 containerd[1631]: time="2025-02-13T15:31:22.660766499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:31:22.661840 containerd[1631]: time="2025-02-13T15:31:22.661796064Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:22.663983 kubelet[2528]: W0213 15:31:22.663907 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.158.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:22.663983 
kubelet[2528]: E0213 15:31:22.663964 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.158.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:22.666665 containerd[1631]: time="2025-02-13T15:31:22.666609045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:22.668030 containerd[1631]: time="2025-02-13T15:31:22.667742010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.022921ms" Feb 13 15:31:22.669846 containerd[1631]: time="2025-02-13T15:31:22.669811179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.093872ms" Feb 13 15:31:22.672618 containerd[1631]: time="2025-02-13T15:31:22.672502750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.839061ms" Feb 13 15:31:22.770328 containerd[1631]: time="2025-02-13T15:31:22.770218214Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:22.770563 containerd[1631]: time="2025-02-13T15:31:22.770314615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:22.770563 containerd[1631]: time="2025-02-13T15:31:22.770348295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.770859 containerd[1631]: time="2025-02-13T15:31:22.770808257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.774597 containerd[1631]: time="2025-02-13T15:31:22.774394553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:22.774597 containerd[1631]: time="2025-02-13T15:31:22.774458073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:22.774597 containerd[1631]: time="2025-02-13T15:31:22.774474193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.774987 containerd[1631]: time="2025-02-13T15:31:22.774610394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.779330 containerd[1631]: time="2025-02-13T15:31:22.779240014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:22.783108 containerd[1631]: time="2025-02-13T15:31:22.782013706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:22.783108 containerd[1631]: time="2025-02-13T15:31:22.782069866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.783108 containerd[1631]: time="2025-02-13T15:31:22.782168586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:22.800419 kubelet[2528]: W0213 15:31:22.799663 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.158.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-4-c758b1cf91&limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:22.800419 kubelet[2528]: E0213 15:31:22.799728 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.158.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-4-c758b1cf91&limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:22.849263 containerd[1631]: time="2025-02-13T15:31:22.849225677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-4-c758b1cf91,Uid:5ba4c690828210ec8caf65e8c28eaa8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2f5a294e2c9d38e2427f1a26f643ae00a875cb0c6d6d0296b4c69aebf056300\"" Feb 13 15:31:22.855122 containerd[1631]: time="2025-02-13T15:31:22.855005903Z" level=info msg="CreateContainer within sandbox \"e2f5a294e2c9d38e2427f1a26f643ae00a875cb0c6d6d0296b4c69aebf056300\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:31:22.860709 containerd[1631]: time="2025-02-13T15:31:22.860612767Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-4-c758b1cf91,Uid:dc5a2170d476c972c3ecf9cc09f834a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a3d83984bda8d0392208967823a9686a4668d7695a8be3ab9eb03a771eac49\"" Feb 13 15:31:22.865387 containerd[1631]: time="2025-02-13T15:31:22.864995026Z" level=info msg="CreateContainer within sandbox \"d7a3d83984bda8d0392208967823a9686a4668d7695a8be3ab9eb03a771eac49\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:31:22.875114 containerd[1631]: time="2025-02-13T15:31:22.875036469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-4-c758b1cf91,Uid:9ea225cc0b3a20cdfa743599e5d8c668,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b80bdf0ae1f543faebba0f197190c4eb846749a5f3b48cd2d31457096876574\"" Feb 13 15:31:22.876141 containerd[1631]: time="2025-02-13T15:31:22.875385311Z" level=info msg="CreateContainer within sandbox \"e2f5a294e2c9d38e2427f1a26f643ae00a875cb0c6d6d0296b4c69aebf056300\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8\"" Feb 13 15:31:22.877370 containerd[1631]: time="2025-02-13T15:31:22.877308239Z" level=info msg="StartContainer for \"240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8\"" Feb 13 15:31:22.879366 containerd[1631]: time="2025-02-13T15:31:22.879294448Z" level=info msg="CreateContainer within sandbox \"6b80bdf0ae1f543faebba0f197190c4eb846749a5f3b48cd2d31457096876574\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:31:22.896650 containerd[1631]: time="2025-02-13T15:31:22.896476163Z" level=info msg="CreateContainer within sandbox \"d7a3d83984bda8d0392208967823a9686a4668d7695a8be3ab9eb03a771eac49\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bda022729487fd131144dc0cf9ee03b7a19eb50f9e2119a809e3934dd6d3f54b\"" Feb 13 15:31:22.898464 
containerd[1631]: time="2025-02-13T15:31:22.898424371Z" level=info msg="StartContainer for \"bda022729487fd131144dc0cf9ee03b7a19eb50f9e2119a809e3934dd6d3f54b\"" Feb 13 15:31:22.903295 containerd[1631]: time="2025-02-13T15:31:22.903224352Z" level=info msg="CreateContainer within sandbox \"6b80bdf0ae1f543faebba0f197190c4eb846749a5f3b48cd2d31457096876574\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382\"" Feb 13 15:31:22.908491 containerd[1631]: time="2025-02-13T15:31:22.907171009Z" level=info msg="StartContainer for \"30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382\"" Feb 13 15:31:23.002653 containerd[1631]: time="2025-02-13T15:31:23.002595423Z" level=info msg="StartContainer for \"240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8\" returns successfully" Feb 13 15:31:23.004529 containerd[1631]: time="2025-02-13T15:31:23.002753064Z" level=info msg="StartContainer for \"bda022729487fd131144dc0cf9ee03b7a19eb50f9e2119a809e3934dd6d3f54b\" returns successfully" Feb 13 15:31:23.055613 containerd[1631]: time="2025-02-13T15:31:23.055027847Z" level=info msg="StartContainer for \"30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382\" returns successfully" Feb 13 15:31:23.073587 kubelet[2528]: W0213 15:31:23.073478 2528 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.158.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:23.073587 kubelet[2528]: E0213 15:31:23.073563 2528 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.158.182:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.158.182:6443: connect: connection refused Feb 13 15:31:23.079966 kubelet[2528]: E0213 15:31:23.079921 
2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.158.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-4-c758b1cf91?timeout=10s\": dial tcp 138.199.158.182:6443: connect: connection refused" interval="1.6s" Feb 13 15:31:23.188844 kubelet[2528]: I0213 15:31:23.188555 2528 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:25.298765 kubelet[2528]: I0213 15:31:25.298711 2528 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:25.352054 kubelet[2528]: E0213 15:31:25.352006 2528 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-1-4-c758b1cf91\" not found" Feb 13 15:31:25.408407 kubelet[2528]: E0213 15:31:25.408369 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 15:31:25.659855 kubelet[2528]: I0213 15:31:25.658576 2528 apiserver.go:52] "Watching apiserver" Feb 13 15:31:25.677876 kubelet[2528]: I0213 15:31:25.677842 2528 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:31:25.736546 kubelet[2528]: E0213 15:31:25.735164 2528 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:28.335850 systemd[1]: Reloading requested from client PID 2803 ('systemctl') (unit session-5.scope)... Feb 13 15:31:28.336142 systemd[1]: Reloading... Feb 13 15:31:28.451547 zram_generator::config[2846]: No configuration found. 
Feb 13 15:31:28.553255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:28.629830 systemd[1]: Reloading finished in 293 ms. Feb 13 15:31:28.670476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:28.687793 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:31:28.688456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:28.698053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:28.812719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:28.826485 (kubelet)[2898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:31:28.891499 kubelet[2898]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:28.891499 kubelet[2898]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:31:28.891499 kubelet[2898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:31:28.891499 kubelet[2898]: I0213 15:31:28.891165 2898 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:31:28.898114 kubelet[2898]: I0213 15:31:28.897913 2898 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:31:28.898114 kubelet[2898]: I0213 15:31:28.898069 2898 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:31:28.898462 kubelet[2898]: I0213 15:31:28.898426 2898 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:31:28.902612 kubelet[2898]: I0213 15:31:28.901724 2898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:31:28.906812 kubelet[2898]: I0213 15:31:28.905762 2898 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:31:28.917272 kubelet[2898]: I0213 15:31:28.917227 2898 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:31:28.917761 kubelet[2898]: I0213 15:31:28.917742 2898 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.917955 2898 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.917982 2898 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.917991 2898 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:31:28.918365 kubelet[2898]: 
I0213 15:31:28.918027 2898 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.918123 2898 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.918136 2898 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:31:28.918365 kubelet[2898]: I0213 15:31:28.918156 2898 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:31:28.919171 kubelet[2898]: I0213 15:31:28.918568 2898 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:31:28.931555 kubelet[2898]: I0213 15:31:28.929622 2898 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:31:28.931555 kubelet[2898]: I0213 15:31:28.929823 2898 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:31:28.931555 kubelet[2898]: I0213 15:31:28.930160 2898 server.go:1256] "Started kubelet" Feb 13 15:31:28.935256 kubelet[2898]: I0213 15:31:28.934138 2898 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:31:28.942532 kubelet[2898]: I0213 15:31:28.942491 2898 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:31:28.943249 kubelet[2898]: I0213 15:31:28.943229 2898 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:31:28.944423 kubelet[2898]: I0213 15:31:28.944402 2898 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:31:28.945683 kubelet[2898]: I0213 15:31:28.944627 2898 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:31:28.951245 kubelet[2898]: I0213 15:31:28.951203 2898 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:31:28.955731 kubelet[2898]: I0213 15:31:28.955699 2898 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Feb 13 15:31:28.955870 kubelet[2898]: I0213 15:31:28.955853 2898 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:31:28.964688 kubelet[2898]: I0213 15:31:28.964662 2898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:31:28.966070 kubelet[2898]: I0213 15:31:28.965711 2898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:31:28.966070 kubelet[2898]: I0213 15:31:28.965739 2898 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:31:28.966070 kubelet[2898]: I0213 15:31:28.965765 2898 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:31:28.966070 kubelet[2898]: E0213 15:31:28.965812 2898 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:31:28.973563 kubelet[2898]: I0213 15:31:28.972784 2898 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:31:28.973563 kubelet[2898]: I0213 15:31:28.972921 2898 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:31:28.979666 kubelet[2898]: I0213 15:31:28.979644 2898 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:31:28.982249 kubelet[2898]: E0213 15:31:28.982232 2898 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:31:29.033051 kubelet[2898]: I0213 15:31:29.033025 2898 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:31:29.033208 kubelet[2898]: I0213 15:31:29.033198 2898 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:31:29.033266 kubelet[2898]: I0213 15:31:29.033258 2898 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:29.033578 kubelet[2898]: I0213 15:31:29.033494 2898 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:31:29.033766 kubelet[2898]: I0213 15:31:29.033663 2898 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:31:29.033766 kubelet[2898]: I0213 15:31:29.033679 2898 policy_none.go:49] "None policy: Start" Feb 13 15:31:29.034767 kubelet[2898]: I0213 15:31:29.034745 2898 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:31:29.034804 kubelet[2898]: I0213 15:31:29.034786 2898 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:31:29.035031 kubelet[2898]: I0213 15:31:29.035016 2898 state_mem.go:75] "Updated machine memory state" Feb 13 15:31:29.036411 kubelet[2898]: I0213 15:31:29.036373 2898 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:31:29.038209 kubelet[2898]: I0213 15:31:29.038184 2898 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:31:29.055298 kubelet[2898]: I0213 15:31:29.055275 2898 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.064711 kubelet[2898]: I0213 15:31:29.064669 2898 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.065028 kubelet[2898]: I0213 15:31:29.064960 2898 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.066114 kubelet[2898]: I0213 
15:31:29.066084 2898 topology_manager.go:215] "Topology Admit Handler" podUID="dc5a2170d476c972c3ecf9cc09f834a9" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.067041 kubelet[2898]: I0213 15:31:29.066628 2898 topology_manager.go:215] "Topology Admit Handler" podUID="9ea225cc0b3a20cdfa743599e5d8c668" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.067041 kubelet[2898]: I0213 15:31:29.066717 2898 topology_manager.go:215] "Topology Admit Handler" podUID="5ba4c690828210ec8caf65e8c28eaa8a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.157893 kubelet[2898]: I0213 15:31:29.157403 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.158610 kubelet[2898]: I0213 15:31:29.158141 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.158610 kubelet[2898]: I0213 15:31:29.158178 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.159500 kubelet[2898]: 
I0213 15:31:29.158778 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba4c690828210ec8caf65e8c28eaa8a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-4-c758b1cf91\" (UID: \"5ba4c690828210ec8caf65e8c28eaa8a\") " pod="kube-system/kube-scheduler-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.160071 kubelet[2898]: I0213 15:31:29.160005 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.160071 kubelet[2898]: I0213 15:31:29.160041 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.160071 kubelet[2898]: I0213 15:31:29.160066 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: \"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.160203 kubelet[2898]: I0213 15:31:29.160092 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ea225cc0b3a20cdfa743599e5d8c668-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-4-c758b1cf91\" (UID: 
\"9ea225cc0b3a20cdfa743599e5d8c668\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.160203 kubelet[2898]: I0213 15:31:29.160115 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc5a2170d476c972c3ecf9cc09f834a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-4-c758b1cf91\" (UID: \"dc5a2170d476c972c3ecf9cc09f834a9\") " pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" Feb 13 15:31:29.926957 kubelet[2898]: I0213 15:31:29.925571 2898 apiserver.go:52] "Watching apiserver" Feb 13 15:31:29.956558 kubelet[2898]: I0213 15:31:29.956301 2898 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:31:29.997690 sudo[1922]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:30.048211 kubelet[2898]: I0213 15:31:30.048136 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-1-4-c758b1cf91" podStartSLOduration=1.048086927 podStartE2EDuration="1.048086927s" podCreationTimestamp="2025-02-13 15:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:30.047900806 +0000 UTC m=+1.209796100" watchObservedRunningTime="2025-02-13 15:31:30.048086927 +0000 UTC m=+1.209982221" Feb 13 15:31:30.049891 kubelet[2898]: I0213 15:31:30.048239 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-1-4-c758b1cf91" podStartSLOduration=1.048219807 podStartE2EDuration="1.048219807s" podCreationTimestamp="2025-02-13 15:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:30.036397601 +0000 UTC m=+1.198292935" 
watchObservedRunningTime="2025-02-13 15:31:30.048219807 +0000 UTC m=+1.210115101" Feb 13 15:31:30.065413 kubelet[2898]: I0213 15:31:30.065374 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-1-4-c758b1cf91" podStartSLOduration=1.065305475 podStartE2EDuration="1.065305475s" podCreationTimestamp="2025-02-13 15:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:30.064265351 +0000 UTC m=+1.226160685" watchObservedRunningTime="2025-02-13 15:31:30.065305475 +0000 UTC m=+1.227200769" Feb 13 15:31:30.156882 sshd[1921]: Connection closed by 139.178.89.65 port 56522 Feb 13 15:31:30.158965 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:30.162478 systemd[1]: sshd@4-138.199.158.182:22-139.178.89.65:56522.service: Deactivated successfully. Feb 13 15:31:30.165706 systemd-logind[1602]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:31:30.168830 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:31:30.170444 systemd-logind[1602]: Removed session 5. Feb 13 15:31:42.320144 kubelet[2898]: I0213 15:31:42.320095 2898 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:31:42.321212 containerd[1631]: time="2025-02-13T15:31:42.320921380Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:31:42.321817 kubelet[2898]: I0213 15:31:42.321384 2898 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:31:42.775032 kubelet[2898]: I0213 15:31:42.774655 2898 topology_manager.go:215] "Topology Admit Handler" podUID="9edcf9f4-7c3f-4233-bd6e-820435faeee2" podNamespace="kube-system" podName="kube-proxy-xmwqp" Feb 13 15:31:42.784870 kubelet[2898]: I0213 15:31:42.780509 2898 topology_manager.go:215] "Topology Admit Handler" podUID="284d9660-b838-4590-9878-242bb6fb2be9" podNamespace="kube-flannel" podName="kube-flannel-ds-rnq7x" Feb 13 15:31:42.847921 kubelet[2898]: I0213 15:31:42.847887 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9edcf9f4-7c3f-4233-bd6e-820435faeee2-kube-proxy\") pod \"kube-proxy-xmwqp\" (UID: \"9edcf9f4-7c3f-4233-bd6e-820435faeee2\") " pod="kube-system/kube-proxy-xmwqp" Feb 13 15:31:42.848222 kubelet[2898]: I0213 15:31:42.848175 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9edcf9f4-7c3f-4233-bd6e-820435faeee2-lib-modules\") pod \"kube-proxy-xmwqp\" (UID: \"9edcf9f4-7c3f-4233-bd6e-820435faeee2\") " pod="kube-system/kube-proxy-xmwqp" Feb 13 15:31:42.848325 kubelet[2898]: I0213 15:31:42.848313 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/284d9660-b838-4590-9878-242bb6fb2be9-flannel-cfg\") pod \"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.848417 kubelet[2898]: I0213 15:31:42.848406 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94f8m\" (UniqueName: 
\"kubernetes.io/projected/284d9660-b838-4590-9878-242bb6fb2be9-kube-api-access-94f8m\") pod \"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.848564 kubelet[2898]: I0213 15:31:42.848545 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndmk\" (UniqueName: \"kubernetes.io/projected/9edcf9f4-7c3f-4233-bd6e-820435faeee2-kube-api-access-cndmk\") pod \"kube-proxy-xmwqp\" (UID: \"9edcf9f4-7c3f-4233-bd6e-820435faeee2\") " pod="kube-system/kube-proxy-xmwqp" Feb 13 15:31:42.848725 kubelet[2898]: I0213 15:31:42.848713 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/284d9660-b838-4590-9878-242bb6fb2be9-cni\") pod \"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.848813 kubelet[2898]: I0213 15:31:42.848803 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9edcf9f4-7c3f-4233-bd6e-820435faeee2-xtables-lock\") pod \"kube-proxy-xmwqp\" (UID: \"9edcf9f4-7c3f-4233-bd6e-820435faeee2\") " pod="kube-system/kube-proxy-xmwqp" Feb 13 15:31:42.848890 kubelet[2898]: I0213 15:31:42.848881 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/284d9660-b838-4590-9878-242bb6fb2be9-run\") pod \"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.848966 kubelet[2898]: I0213 15:31:42.848958 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/284d9660-b838-4590-9878-242bb6fb2be9-cni-plugin\") pod 
\"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.849050 kubelet[2898]: I0213 15:31:42.849041 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/284d9660-b838-4590-9878-242bb6fb2be9-xtables-lock\") pod \"kube-flannel-ds-rnq7x\" (UID: \"284d9660-b838-4590-9878-242bb6fb2be9\") " pod="kube-flannel/kube-flannel-ds-rnq7x" Feb 13 15:31:42.963009 kubelet[2898]: E0213 15:31:42.962952 2898 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:31:42.963009 kubelet[2898]: E0213 15:31:42.962989 2898 projected.go:200] Error preparing data for projected volume kube-api-access-94f8m for pod kube-flannel/kube-flannel-ds-rnq7x: configmap "kube-root-ca.crt" not found Feb 13 15:31:42.963165 kubelet[2898]: E0213 15:31:42.963046 2898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/284d9660-b838-4590-9878-242bb6fb2be9-kube-api-access-94f8m podName:284d9660-b838-4590-9878-242bb6fb2be9 nodeName:}" failed. No retries permitted until 2025-02-13 15:31:43.463024961 +0000 UTC m=+14.624920215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-94f8m" (UniqueName: "kubernetes.io/projected/284d9660-b838-4590-9878-242bb6fb2be9-kube-api-access-94f8m") pod "kube-flannel-ds-rnq7x" (UID: "284d9660-b838-4590-9878-242bb6fb2be9") : configmap "kube-root-ca.crt" not found Feb 13 15:31:42.964651 kubelet[2898]: E0213 15:31:42.964611 2898 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:31:42.964651 kubelet[2898]: E0213 15:31:42.964655 2898 projected.go:200] Error preparing data for projected volume kube-api-access-cndmk for pod kube-system/kube-proxy-xmwqp: configmap "kube-root-ca.crt" not found Feb 13 15:31:42.964806 kubelet[2898]: E0213 15:31:42.964724 2898 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9edcf9f4-7c3f-4233-bd6e-820435faeee2-kube-api-access-cndmk podName:9edcf9f4-7c3f-4233-bd6e-820435faeee2 nodeName:}" failed. No retries permitted until 2025-02-13 15:31:43.464705743 +0000 UTC m=+14.626601037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cndmk" (UniqueName: "kubernetes.io/projected/9edcf9f4-7c3f-4233-bd6e-820435faeee2-kube-api-access-cndmk") pod "kube-proxy-xmwqp" (UID: "9edcf9f4-7c3f-4233-bd6e-820435faeee2") : configmap "kube-root-ca.crt" not found Feb 13 15:31:43.683399 containerd[1631]: time="2025-02-13T15:31:43.683335575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmwqp,Uid:9edcf9f4-7c3f-4233-bd6e-820435faeee2,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:43.698914 containerd[1631]: time="2025-02-13T15:31:43.698001824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rnq7x,Uid:284d9660-b838-4590-9878-242bb6fb2be9,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:31:43.716322 containerd[1631]: time="2025-02-13T15:31:43.716103837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:43.716322 containerd[1631]: time="2025-02-13T15:31:43.716154879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:43.716322 containerd[1631]: time="2025-02-13T15:31:43.716169559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:43.716322 containerd[1631]: time="2025-02-13T15:31:43.716246522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:43.739663 containerd[1631]: time="2025-02-13T15:31:43.738170153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:43.739663 containerd[1631]: time="2025-02-13T15:31:43.738228555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:43.739663 containerd[1631]: time="2025-02-13T15:31:43.738240475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:43.739663 containerd[1631]: time="2025-02-13T15:31:43.738402441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:43.775425 containerd[1631]: time="2025-02-13T15:31:43.774853476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmwqp,Uid:9edcf9f4-7c3f-4233-bd6e-820435faeee2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f21846e6b5a6944012014c52b872402984623ff6d7a66d881fc4366bbafc03d\"" Feb 13 15:31:43.782148 containerd[1631]: time="2025-02-13T15:31:43.782105097Z" level=info msg="CreateContainer within sandbox \"3f21846e6b5a6944012014c52b872402984623ff6d7a66d881fc4366bbafc03d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:31:43.800934 containerd[1631]: time="2025-02-13T15:31:43.800831053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rnq7x,Uid:284d9660-b838-4590-9878-242bb6fb2be9,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\"" Feb 13 15:31:43.806204 containerd[1631]: time="2025-02-13T15:31:43.806019200Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:31:43.807975 containerd[1631]: time="2025-02-13T15:31:43.807926868Z" level=info msg="CreateContainer within sandbox \"3f21846e6b5a6944012014c52b872402984623ff6d7a66d881fc4366bbafc03d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8c4721a2527b81dd9f4d2aa21896b30fbcda1c47eb0491853d30ada4e6c75c3\"" Feb 13 15:31:43.808813 containerd[1631]: time="2025-02-13T15:31:43.808771779Z" level=info msg="StartContainer for \"f8c4721a2527b81dd9f4d2aa21896b30fbcda1c47eb0491853d30ada4e6c75c3\"" Feb 13 15:31:43.869465 containerd[1631]: time="2025-02-13T15:31:43.869395005Z" level=info msg="StartContainer for \"f8c4721a2527b81dd9f4d2aa21896b30fbcda1c47eb0491853d30ada4e6c75c3\" returns successfully" Feb 13 15:31:46.656959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711411649.mount: Deactivated successfully. 
Feb 13 15:31:46.697093 containerd[1631]: time="2025-02-13T15:31:46.696234964Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:46.697093 containerd[1631]: time="2025-02-13T15:31:46.697037591Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:31:46.698235 containerd[1631]: time="2025-02-13T15:31:46.698119267Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:46.700752 containerd[1631]: time="2025-02-13T15:31:46.700707834Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:46.704268 containerd[1631]: time="2025-02-13T15:31:46.701894634Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.895835753s" Feb 13 15:31:46.704268 containerd[1631]: time="2025-02-13T15:31:46.701929035Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:31:46.705929 containerd[1631]: time="2025-02-13T15:31:46.705899848Z" level=info msg="CreateContainer within sandbox \"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:31:46.720506 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1762147006.mount: Deactivated successfully. Feb 13 15:31:46.729410 containerd[1631]: time="2025-02-13T15:31:46.729342034Z" level=info msg="CreateContainer within sandbox \"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"dd86237f440da30322060fa34a14d1c25a4dd5ea1d91fd159d1fc7da9771e189\"" Feb 13 15:31:46.731048 containerd[1631]: time="2025-02-13T15:31:46.730119180Z" level=info msg="StartContainer for \"dd86237f440da30322060fa34a14d1c25a4dd5ea1d91fd159d1fc7da9771e189\"" Feb 13 15:31:46.782725 containerd[1631]: time="2025-02-13T15:31:46.782670540Z" level=info msg="StartContainer for \"dd86237f440da30322060fa34a14d1c25a4dd5ea1d91fd159d1fc7da9771e189\" returns successfully" Feb 13 15:31:46.818627 containerd[1631]: time="2025-02-13T15:31:46.818570103Z" level=info msg="shim disconnected" id=dd86237f440da30322060fa34a14d1c25a4dd5ea1d91fd159d1fc7da9771e189 namespace=k8s.io Feb 13 15:31:46.818988 containerd[1631]: time="2025-02-13T15:31:46.818963916Z" level=warning msg="cleaning up after shim disconnected" id=dd86237f440da30322060fa34a14d1c25a4dd5ea1d91fd159d1fc7da9771e189 namespace=k8s.io Feb 13 15:31:46.819061 containerd[1631]: time="2025-02-13T15:31:46.819047359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:31:47.056052 containerd[1631]: time="2025-02-13T15:31:47.055537120Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:31:47.073672 kubelet[2898]: I0213 15:31:47.073586 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xmwqp" podStartSLOduration=5.073493387 podStartE2EDuration="5.073493387s" podCreationTimestamp="2025-02-13 15:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:44.060222956 +0000 UTC m=+15.222118250" 
watchObservedRunningTime="2025-02-13 15:31:47.073493387 +0000 UTC m=+18.235388721" Feb 13 15:31:49.939240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982397727.mount: Deactivated successfully. Feb 13 15:31:50.651561 containerd[1631]: time="2025-02-13T15:31:50.651324191Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:50.653437 containerd[1631]: time="2025-02-13T15:31:50.653372414Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 15:31:50.654225 containerd[1631]: time="2025-02-13T15:31:50.653997673Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:50.657877 containerd[1631]: time="2025-02-13T15:31:50.657813309Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:50.659049 containerd[1631]: time="2025-02-13T15:31:50.659006465Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.602801604s" Feb 13 15:31:50.659049 containerd[1631]: time="2025-02-13T15:31:50.659043306Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:31:50.663494 containerd[1631]: time="2025-02-13T15:31:50.662239204Z" level=info msg="CreateContainer within sandbox 
\"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:31:50.679652 containerd[1631]: time="2025-02-13T15:31:50.679591772Z" level=info msg="CreateContainer within sandbox \"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e\"" Feb 13 15:31:50.681268 containerd[1631]: time="2025-02-13T15:31:50.680243592Z" level=info msg="StartContainer for \"e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e\"" Feb 13 15:31:50.743901 containerd[1631]: time="2025-02-13T15:31:50.743844689Z" level=info msg="StartContainer for \"e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e\" returns successfully" Feb 13 15:31:50.800031 kubelet[2898]: I0213 15:31:50.799840 2898 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:31:50.817809 containerd[1631]: time="2025-02-13T15:31:50.817680417Z" level=info msg="shim disconnected" id=e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e namespace=k8s.io Feb 13 15:31:50.817809 containerd[1631]: time="2025-02-13T15:31:50.817735899Z" level=warning msg="cleaning up after shim disconnected" id=e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e namespace=k8s.io Feb 13 15:31:50.817809 containerd[1631]: time="2025-02-13T15:31:50.817746299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:31:50.835566 kubelet[2898]: I0213 15:31:50.835329 2898 topology_manager.go:215] "Topology Admit Handler" podUID="65e32a27-1c49-44f0-9bfa-100c01f9751e" podNamespace="kube-system" podName="coredns-76f75df574-n6q69" Feb 13 15:31:50.841097 kubelet[2898]: I0213 15:31:50.838088 2898 topology_manager.go:215] "Topology Admit Handler" podUID="2cb50671-1a12-4015-ad8a-80593b9a506c" podNamespace="kube-system" 
podName="coredns-76f75df574-xpwv9" Feb 13 15:31:50.850310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1c48c6d7277c4ca1ea9f3454c2f70c2db398bfe34982ca9523ae19bda17e97e-rootfs.mount: Deactivated successfully. Feb 13 15:31:50.903440 kubelet[2898]: I0213 15:31:50.903299 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m44hs\" (UniqueName: \"kubernetes.io/projected/2cb50671-1a12-4015-ad8a-80593b9a506c-kube-api-access-m44hs\") pod \"coredns-76f75df574-xpwv9\" (UID: \"2cb50671-1a12-4015-ad8a-80593b9a506c\") " pod="kube-system/coredns-76f75df574-xpwv9" Feb 13 15:31:50.903853 kubelet[2898]: I0213 15:31:50.903828 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb50671-1a12-4015-ad8a-80593b9a506c-config-volume\") pod \"coredns-76f75df574-xpwv9\" (UID: \"2cb50671-1a12-4015-ad8a-80593b9a506c\") " pod="kube-system/coredns-76f75df574-xpwv9" Feb 13 15:31:50.904080 kubelet[2898]: I0213 15:31:50.904040 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65e32a27-1c49-44f0-9bfa-100c01f9751e-config-volume\") pod \"coredns-76f75df574-n6q69\" (UID: \"65e32a27-1c49-44f0-9bfa-100c01f9751e\") " pod="kube-system/coredns-76f75df574-n6q69" Feb 13 15:31:50.904306 kubelet[2898]: I0213 15:31:50.904214 2898 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkh8\" (UniqueName: \"kubernetes.io/projected/65e32a27-1c49-44f0-9bfa-100c01f9751e-kube-api-access-ttkh8\") pod \"coredns-76f75df574-n6q69\" (UID: \"65e32a27-1c49-44f0-9bfa-100c01f9751e\") " pod="kube-system/coredns-76f75df574-n6q69" Feb 13 15:31:51.071081 containerd[1631]: time="2025-02-13T15:31:51.070548548Z" level=info msg="CreateContainer within sandbox 
\"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:31:51.092753 containerd[1631]: time="2025-02-13T15:31:51.092712008Z" level=info msg="CreateContainer within sandbox \"134591526ee8617b7f204ad5b07b0d1663aeee490f19ce581739c6840bda32f2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8e7406db59d06f81233767456a166d3028a5ef0bab0a0a7737f682643366f61a\"" Feb 13 15:31:51.094888 containerd[1631]: time="2025-02-13T15:31:51.094768509Z" level=info msg="StartContainer for \"8e7406db59d06f81233767456a166d3028a5ef0bab0a0a7737f682643366f61a\"" Feb 13 15:31:51.147010 containerd[1631]: time="2025-02-13T15:31:51.146960381Z" level=info msg="StartContainer for \"8e7406db59d06f81233767456a166d3028a5ef0bab0a0a7737f682643366f61a\" returns successfully" Feb 13 15:31:51.150594 containerd[1631]: time="2025-02-13T15:31:51.149563139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n6q69,Uid:65e32a27-1c49-44f0-9bfa-100c01f9751e,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:51.150594 containerd[1631]: time="2025-02-13T15:31:51.149908189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpwv9,Uid:2cb50671-1a12-4015-ad8a-80593b9a506c,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:51.233185 containerd[1631]: time="2025-02-13T15:31:51.233019301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n6q69,Uid:65e32a27-1c49-44f0-9bfa-100c01f9751e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef21dc0158f510c7d8d4abcdecb976e3791090c1e57e5aedbd357ae96825fd17\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:31:51.234002 kubelet[2898]: E0213 15:31:51.233751 2898 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ef21dc0158f510c7d8d4abcdecb976e3791090c1e57e5aedbd357ae96825fd17\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:31:51.234002 kubelet[2898]: E0213 15:31:51.233826 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef21dc0158f510c7d8d4abcdecb976e3791090c1e57e5aedbd357ae96825fd17\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-n6q69" Feb 13 15:31:51.234002 kubelet[2898]: E0213 15:31:51.233848 2898 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef21dc0158f510c7d8d4abcdecb976e3791090c1e57e5aedbd357ae96825fd17\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-n6q69" Feb 13 15:31:51.234002 kubelet[2898]: E0213 15:31:51.233912 2898 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-n6q69_kube-system(65e32a27-1c49-44f0-9bfa-100c01f9751e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-n6q69_kube-system(65e32a27-1c49-44f0-9bfa-100c01f9751e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef21dc0158f510c7d8d4abcdecb976e3791090c1e57e5aedbd357ae96825fd17\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-n6q69" podUID="65e32a27-1c49-44f0-9bfa-100c01f9751e" Feb 13 15:31:51.235125 containerd[1631]: time="2025-02-13T15:31:51.235070882Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-xpwv9,Uid:2cb50671-1a12-4015-ad8a-80593b9a506c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5b3fa5a53c23d78fa4007717ea1d881f151e162b86916a0a15fe18de43c1638\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:31:51.235626 kubelet[2898]: E0213 15:31:51.235543 2898 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b3fa5a53c23d78fa4007717ea1d881f151e162b86916a0a15fe18de43c1638\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:31:51.235626 kubelet[2898]: E0213 15:31:51.235600 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b3fa5a53c23d78fa4007717ea1d881f151e162b86916a0a15fe18de43c1638\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-xpwv9" Feb 13 15:31:51.235855 kubelet[2898]: E0213 15:31:51.235732 2898 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b3fa5a53c23d78fa4007717ea1d881f151e162b86916a0a15fe18de43c1638\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-xpwv9" Feb 13 15:31:51.235997 kubelet[2898]: E0213 15:31:51.235915 2898 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xpwv9_kube-system(2cb50671-1a12-4015-ad8a-80593b9a506c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-xpwv9_kube-system(2cb50671-1a12-4015-ad8a-80593b9a506c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5b3fa5a53c23d78fa4007717ea1d881f151e162b86916a0a15fe18de43c1638\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-xpwv9" podUID="2cb50671-1a12-4015-ad8a-80593b9a506c" Feb 13 15:31:52.231616 systemd-networkd[1242]: flannel.1: Link UP Feb 13 15:31:52.231622 systemd-networkd[1242]: flannel.1: Gained carrier Feb 13 15:31:53.454809 systemd-networkd[1242]: flannel.1: Gained IPv6LL Feb 13 15:32:03.967386 containerd[1631]: time="2025-02-13T15:32:03.967273652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n6q69,Uid:65e32a27-1c49-44f0-9bfa-100c01f9751e,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:04.002428 systemd-networkd[1242]: cni0: Link UP Feb 13 15:32:04.002444 systemd-networkd[1242]: cni0: Gained carrier Feb 13 15:32:04.010891 systemd-networkd[1242]: cni0: Lost carrier Feb 13 15:32:04.012238 systemd-networkd[1242]: vethad26895d: Link UP Feb 13 15:32:04.014608 kernel: cni0: port 1(vethad26895d) entered blocking state Feb 13 15:32:04.014728 kernel: cni0: port 1(vethad26895d) entered disabled state Feb 13 15:32:04.015872 kernel: vethad26895d: entered allmulticast mode Feb 13 15:32:04.015965 kernel: vethad26895d: entered promiscuous mode Feb 13 15:32:04.027276 systemd-networkd[1242]: vethad26895d: Gained carrier Feb 13 15:32:04.027822 kernel: cni0: port 1(vethad26895d) entered blocking state Feb 13 15:32:04.027897 kernel: cni0: port 1(vethad26895d) entered forwarding state Feb 13 15:32:04.027945 systemd-networkd[1242]: cni0: Gained carrier Feb 13 15:32:04.033828 containerd[1631]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface 
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Feb 13 15:32:04.033828 containerd[1631]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:32:04.052382 containerd[1631]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:32:04.051951921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:04.052382 containerd[1631]: time="2025-02-13T15:32:04.052084644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:04.052382 containerd[1631]: time="2025-02-13T15:32:04.052105964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:04.052382 containerd[1631]: time="2025-02-13T15:32:04.052215447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:04.099512 containerd[1631]: time="2025-02-13T15:32:04.099165616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n6q69,Uid:65e32a27-1c49-44f0-9bfa-100c01f9751e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e822448449418ad04691d146f19d8e8b35c49de50bc0c9ca1371544c463f6dae\"" Feb 13 15:32:04.105350 containerd[1631]: time="2025-02-13T15:32:04.105308353Z" level=info msg="CreateContainer within sandbox \"e822448449418ad04691d146f19d8e8b35c49de50bc0c9ca1371544c463f6dae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:04.121841 containerd[1631]: time="2025-02-13T15:32:04.121779162Z" level=info msg="CreateContainer within sandbox \"e822448449418ad04691d146f19d8e8b35c49de50bc0c9ca1371544c463f6dae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2bdbba403a6bceafa8bf62e42f935c1419e984bf11d44833f5db2b31ed0843d\"" Feb 13 15:32:04.123820 containerd[1631]: time="2025-02-13T15:32:04.122443776Z" level=info msg="StartContainer for \"b2bdbba403a6bceafa8bf62e42f935c1419e984bf11d44833f5db2b31ed0843d\"" Feb 13 15:32:04.179103 containerd[1631]: time="2025-02-13T15:32:04.179048722Z" level=info msg="StartContainer for \"b2bdbba403a6bceafa8bf62e42f935c1419e984bf11d44833f5db2b31ed0843d\" returns successfully" Feb 13 15:32:04.968113 containerd[1631]: time="2025-02-13T15:32:04.967946916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpwv9,Uid:2cb50671-1a12-4015-ad8a-80593b9a506c,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:05.006228 systemd-networkd[1242]: vetha8f327d3: Link UP Feb 13 15:32:05.008784 kernel: cni0: port 2(vetha8f327d3) entered blocking state Feb 13 15:32:05.008811 kernel: cni0: port 2(vetha8f327d3) entered disabled state Feb 13 15:32:05.008826 kernel: vetha8f327d3: entered allmulticast mode Feb 13 15:32:05.010277 kernel: vetha8f327d3: entered promiscuous mode Feb 13 15:32:05.010344 
kernel: cni0: port 2(vetha8f327d3) entered blocking state Feb 13 15:32:05.010361 kernel: cni0: port 2(vetha8f327d3) entered forwarding state Feb 13 15:32:05.015884 systemd-networkd[1242]: vetha8f327d3: Gained carrier Feb 13 15:32:05.017196 containerd[1631]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"} Feb 13 15:32:05.017196 containerd[1631]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:32:05.034592 containerd[1631]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:32:05.034430547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:05.034746 containerd[1631]: time="2025-02-13T15:32:05.034501749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:05.034746 containerd[1631]: time="2025-02-13T15:32:05.034646312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:05.034822 containerd[1631]: time="2025-02-13T15:32:05.034760954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:05.088343 containerd[1631]: time="2025-02-13T15:32:05.088277606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpwv9,Uid:2cb50671-1a12-4015-ad8a-80593b9a506c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9011457ef1bac17af53825f2ab2a75a79d4603fddfa83a147f624bdd610cc40a\"" Feb 13 15:32:05.092749 containerd[1631]: time="2025-02-13T15:32:05.092627862Z" level=info msg="CreateContainer within sandbox \"9011457ef1bac17af53825f2ab2a75a79d4603fddfa83a147f624bdd610cc40a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:05.102887 systemd-networkd[1242]: vethad26895d: Gained IPv6LL Feb 13 15:32:05.112175 containerd[1631]: time="2025-02-13T15:32:05.112130529Z" level=info msg="CreateContainer within sandbox \"9011457ef1bac17af53825f2ab2a75a79d4603fddfa83a147f624bdd610cc40a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2df277a2de23198e715426757e4686d9ecd6185b84f6d8932b92d010e004b4a9\"" Feb 13 15:32:05.113729 containerd[1631]: time="2025-02-13T15:32:05.113699563Z" level=info msg="StartContainer for \"2df277a2de23198e715426757e4686d9ecd6185b84f6d8932b92d010e004b4a9\"" Feb 13 15:32:05.139638 kubelet[2898]: I0213 15:32:05.139588 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rnq7x" podStartSLOduration=16.278809514 podStartE2EDuration="23.136493822s" podCreationTimestamp="2025-02-13 15:31:42 +0000 UTC" firstStartedPulling="2025-02-13 15:31:43.802686599 +0000 UTC m=+14.964581893" lastFinishedPulling="2025-02-13 15:31:50.660370907 +0000 UTC m=+21.822266201" observedRunningTime="2025-02-13 15:31:52.087757906 +0000 UTC m=+23.249653200" watchObservedRunningTime="2025-02-13 15:32:05.136493822 +0000 UTC m=+36.298389116" Feb 13 15:32:05.140990 kubelet[2898]: I0213 15:32:05.140958 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/coredns-76f75df574-n6q69" podStartSLOduration=22.140914039 podStartE2EDuration="22.140914039s" podCreationTimestamp="2025-02-13 15:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:05.139977459 +0000 UTC m=+36.301872753" watchObservedRunningTime="2025-02-13 15:32:05.140914039 +0000 UTC m=+36.302809333" Feb 13 15:32:05.196890 containerd[1631]: time="2025-02-13T15:32:05.196829464Z" level=info msg="StartContainer for \"2df277a2de23198e715426757e4686d9ecd6185b84f6d8932b92d010e004b4a9\" returns successfully" Feb 13 15:32:05.550897 systemd-networkd[1242]: cni0: Gained IPv6LL Feb 13 15:32:06.132286 kubelet[2898]: I0213 15:32:06.131829 2898 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xpwv9" podStartSLOduration=23.131776963 podStartE2EDuration="23.131776963s" podCreationTimestamp="2025-02-13 15:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:06.130633579 +0000 UTC m=+37.292528913" watchObservedRunningTime="2025-02-13 15:32:06.131776963 +0000 UTC m=+37.293672257" Feb 13 15:32:06.191790 systemd-networkd[1242]: vetha8f327d3: Gained IPv6LL Feb 13 15:34:40.977613 update_engine[1611]: I20250213 15:34:40.977478 1611 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:34:40.978127 update_engine[1611]: I20250213 15:34:40.977583 1611 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:34:40.978127 update_engine[1611]: I20250213 15:34:40.977948 1611 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:34:40.978746 update_engine[1611]: I20250213 15:34:40.978653 1611 omaha_request_params.cc:62] Current group set to stable Feb 13 15:34:40.978999 update_engine[1611]: 
I20250213 15:34:40.978863 1611 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 15:34:40.978999 update_engine[1611]: I20250213 15:34:40.978893 1611 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:34:40.978999 update_engine[1611]: I20250213 15:34:40.978928 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:34:40.978999 update_engine[1611]: I20250213 15:34:40.978982 1611 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:34:40.979129 update_engine[1611]: I20250213 15:34:40.979080 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:34:40.979129 update_engine[1611]: I20250213 15:34:40.979095 1611 omaha_request_action.cc:272] Request: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: Feb 13 15:34:40.979129 update_engine[1611]: I20250213 15:34:40.979107 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:34:40.979926 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:34:40.981298 update_engine[1611]: I20250213 15:34:40.981236 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:34:40.981735 update_engine[1611]: I20250213 15:34:40.981685 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:34:40.982465 update_engine[1611]: E20250213 15:34:40.982391 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:34:40.982577 update_engine[1611]: I20250213 15:34:40.982465 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 15:34:50.889087 update_engine[1611]: I20250213 15:34:50.888978 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:34:50.889699 update_engine[1611]: I20250213 15:34:50.889297 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:34:50.889699 update_engine[1611]: I20250213 15:34:50.889612 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:34:50.890186 update_engine[1611]: E20250213 15:34:50.890115 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:34:50.890286 update_engine[1611]: I20250213 15:34:50.890200 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 15:35:00.888537 update_engine[1611]: I20250213 15:35:00.888427 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:35:00.890579 update_engine[1611]: I20250213 15:35:00.888819 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:35:00.890579 update_engine[1611]: I20250213 15:35:00.889106 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:35:00.890579 update_engine[1611]: E20250213 15:35:00.889698 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:35:00.890579 update_engine[1611]: I20250213 15:35:00.889777 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 15:35:10.887118 update_engine[1611]: I20250213 15:35:10.886863 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:35:10.887861 update_engine[1611]: I20250213 15:35:10.887375 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:35:10.887861 update_engine[1611]: I20250213 15:35:10.887786 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:35:10.888263 update_engine[1611]: E20250213 15:35:10.888176 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:35:10.888263 update_engine[1611]: I20250213 15:35:10.888257 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:35:10.888402 update_engine[1611]: I20250213 15:35:10.888273 1611 omaha_request_action.cc:617] Omaha request response: Feb 13 15:35:10.888402 update_engine[1611]: E20250213 15:35:10.888376 1611 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888401 1611 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888412 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888422 1611 update_attempter.cc:306] Processing Done. Feb 13 15:35:10.888614 update_engine[1611]: E20250213 15:35:10.888442 1611 update_attempter.cc:619] Update failed. 
Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888456 1611 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888465 1611 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888475 1611 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 15:35:10.888614 update_engine[1611]: I20250213 15:35:10.888609 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:35:10.889575 update_engine[1611]: I20250213 15:35:10.888646 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:35:10.889575 update_engine[1611]: I20250213 15:35:10.888679 1611 omaha_request_action.cc:272] Request: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: Feb 13 15:35:10.889575 update_engine[1611]: I20250213 15:35:10.888692 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:35:10.889575 update_engine[1611]: I20250213 15:35:10.888954 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:35:10.889575 update_engine[1611]: I20250213 15:35:10.889262 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:35:10.890127 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 15:35:10.890431 update_engine[1611]: E20250213 15:35:10.889627 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889716 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889734 1611 omaha_request_action.cc:617] Omaha request response: Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889745 1611 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889755 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889765 1611 update_attempter.cc:306] Processing Done. Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889775 1611 update_attempter.cc:310] Error event sent. Feb 13 15:35:10.890431 update_engine[1611]: I20250213 15:35:10.889787 1611 update_check_scheduler.cc:74] Next update check in 42m49s Feb 13 15:35:10.890723 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 15:36:12.591798 systemd[1]: Started sshd@5-138.199.158.182:22-139.178.89.65:60704.service - OpenSSH per-connection server daemon (139.178.89.65:60704). Feb 13 15:36:13.578642 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 60704 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:13.580810 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:13.588304 systemd-logind[1602]: New session 6 of user core. 
Feb 13 15:36:13.594870 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:36:14.350025 sshd[4860]: Connection closed by 139.178.89.65 port 60704 Feb 13 15:36:14.350616 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:14.358015 systemd[1]: sshd@5-138.199.158.182:22-139.178.89.65:60704.service: Deactivated successfully. Feb 13 15:36:14.358559 systemd-logind[1602]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:36:14.362344 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:36:14.365254 systemd-logind[1602]: Removed session 6. Feb 13 15:36:19.518111 systemd[1]: Started sshd@6-138.199.158.182:22-139.178.89.65:45568.service - OpenSSH per-connection server daemon (139.178.89.65:45568). Feb 13 15:36:20.504650 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 45568 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:20.506357 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:20.512406 systemd-logind[1602]: New session 7 of user core. Feb 13 15:36:20.522562 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:36:21.258133 sshd[4898]: Connection closed by 139.178.89.65 port 45568 Feb 13 15:36:21.258889 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:21.262509 systemd[1]: sshd@6-138.199.158.182:22-139.178.89.65:45568.service: Deactivated successfully. Feb 13 15:36:21.267644 systemd-logind[1602]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:36:21.269143 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:36:21.270844 systemd-logind[1602]: Removed session 7. Feb 13 15:36:26.424995 systemd[1]: Started sshd@7-138.199.158.182:22-139.178.89.65:47770.service - OpenSSH per-connection server daemon (139.178.89.65:47770). 
Feb 13 15:36:27.412416 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 47770 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:27.414691 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:27.420964 systemd-logind[1602]: New session 8 of user core. Feb 13 15:36:27.429108 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:36:28.167632 sshd[4934]: Connection closed by 139.178.89.65 port 47770 Feb 13 15:36:28.168465 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:28.173437 systemd[1]: sshd@7-138.199.158.182:22-139.178.89.65:47770.service: Deactivated successfully. Feb 13 15:36:28.179811 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:36:28.181128 systemd-logind[1602]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:36:28.183049 systemd-logind[1602]: Removed session 8. Feb 13 15:36:28.337938 systemd[1]: Started sshd@8-138.199.158.182:22-139.178.89.65:47778.service - OpenSSH per-connection server daemon (139.178.89.65:47778). Feb 13 15:36:29.323470 sshd[4953]: Accepted publickey for core from 139.178.89.65 port 47778 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:29.325404 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:29.331997 systemd-logind[1602]: New session 9 of user core. Feb 13 15:36:29.338162 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:36:30.118259 sshd[4973]: Connection closed by 139.178.89.65 port 47778 Feb 13 15:36:30.119837 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:30.124233 systemd[1]: sshd@8-138.199.158.182:22-139.178.89.65:47778.service: Deactivated successfully. Feb 13 15:36:30.129574 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:36:30.130663 systemd-logind[1602]: Session 9 logged out. 
Waiting for processes to exit. Feb 13 15:36:30.131793 systemd-logind[1602]: Removed session 9. Feb 13 15:36:30.290070 systemd[1]: Started sshd@9-138.199.158.182:22-139.178.89.65:47794.service - OpenSSH per-connection server daemon (139.178.89.65:47794). Feb 13 15:36:31.297166 sshd[4982]: Accepted publickey for core from 139.178.89.65 port 47794 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:31.299436 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:31.304626 systemd-logind[1602]: New session 10 of user core. Feb 13 15:36:31.317042 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:36:32.061176 sshd[4985]: Connection closed by 139.178.89.65 port 47794 Feb 13 15:36:32.062087 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:32.066800 systemd[1]: sshd@9-138.199.158.182:22-139.178.89.65:47794.service: Deactivated successfully. Feb 13 15:36:32.071773 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:36:32.073439 systemd-logind[1602]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:36:32.075482 systemd-logind[1602]: Removed session 10. Feb 13 15:36:37.229001 systemd[1]: Started sshd@10-138.199.158.182:22-139.178.89.65:34432.service - OpenSSH per-connection server daemon (139.178.89.65:34432). Feb 13 15:36:38.213843 sshd[5018]: Accepted publickey for core from 139.178.89.65 port 34432 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:38.215691 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:38.223448 systemd-logind[1602]: New session 11 of user core. Feb 13 15:36:38.227816 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:36:38.969469 sshd[5027]: Connection closed by 139.178.89.65 port 34432 Feb 13 15:36:38.970389 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:38.976281 systemd[1]: sshd@10-138.199.158.182:22-139.178.89.65:34432.service: Deactivated successfully. Feb 13 15:36:38.979988 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:36:38.982222 systemd-logind[1602]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:36:38.983503 systemd-logind[1602]: Removed session 11. Feb 13 15:36:39.132846 systemd[1]: Started sshd@11-138.199.158.182:22-139.178.89.65:34442.service - OpenSSH per-connection server daemon (139.178.89.65:34442). Feb 13 15:36:40.107044 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 34442 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:40.109099 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:40.116840 systemd-logind[1602]: New session 12 of user core. Feb 13 15:36:40.120591 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:36:40.894606 sshd[5056]: Connection closed by 139.178.89.65 port 34442 Feb 13 15:36:40.895603 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:40.902695 systemd[1]: sshd@11-138.199.158.182:22-139.178.89.65:34442.service: Deactivated successfully. Feb 13 15:36:40.904659 systemd-logind[1602]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:36:40.908239 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:36:40.910064 systemd-logind[1602]: Removed session 12. Feb 13 15:36:41.062191 systemd[1]: Started sshd@12-138.199.158.182:22-139.178.89.65:34450.service - OpenSSH per-connection server daemon (139.178.89.65:34450). 
Feb 13 15:36:42.047445 sshd[5065]: Accepted publickey for core from 139.178.89.65 port 34450 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:42.049627 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:42.055617 systemd-logind[1602]: New session 13 of user core. Feb 13 15:36:42.060036 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:36:44.277811 sshd[5068]: Connection closed by 139.178.89.65 port 34450 Feb 13 15:36:44.278925 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:44.285827 systemd[1]: sshd@12-138.199.158.182:22-139.178.89.65:34450.service: Deactivated successfully. Feb 13 15:36:44.290280 systemd-logind[1602]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:36:44.291132 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:36:44.293365 systemd-logind[1602]: Removed session 13. Feb 13 15:36:44.447847 systemd[1]: Started sshd@13-138.199.158.182:22-139.178.89.65:34452.service - OpenSSH per-connection server daemon (139.178.89.65:34452). Feb 13 15:36:45.439101 sshd[5107]: Accepted publickey for core from 139.178.89.65 port 34452 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:36:45.440263 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:45.444824 systemd-logind[1602]: New session 14 of user core. Feb 13 15:36:45.456384 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:36:46.334503 sshd[5110]: Connection closed by 139.178.89.65 port 34452 Feb 13 15:36:46.335611 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:46.342580 systemd-logind[1602]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:36:46.342643 systemd[1]: sshd@13-138.199.158.182:22-139.178.89.65:34452.service: Deactivated successfully. 
Feb 13 15:36:46.347775 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:36:46.349450 systemd-logind[1602]: Removed session 14.
Feb 13 15:36:46.501793 systemd[1]: Started sshd@14-138.199.158.182:22-139.178.89.65:51928.service - OpenSSH per-connection server daemon (139.178.89.65:51928).
Feb 13 15:36:47.490614 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 51928 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:36:47.492281 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:47.497099 systemd-logind[1602]: New session 15 of user core.
Feb 13 15:36:47.502791 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:36:48.250208 sshd[5122]: Connection closed by 139.178.89.65 port 51928
Feb 13 15:36:48.251140 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:48.256242 systemd[1]: sshd@14-138.199.158.182:22-139.178.89.65:51928.service: Deactivated successfully.
Feb 13 15:36:48.260268 systemd-logind[1602]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:36:48.261936 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:36:48.263916 systemd-logind[1602]: Removed session 15.
Feb 13 15:36:53.416869 systemd[1]: Started sshd@15-138.199.158.182:22-139.178.89.65:51944.service - OpenSSH per-connection server daemon (139.178.89.65:51944).
Feb 13 15:36:54.410058 sshd[5164]: Accepted publickey for core from 139.178.89.65 port 51944 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:36:54.412471 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:54.417977 systemd-logind[1602]: New session 16 of user core.
Feb 13 15:36:54.425188 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:36:55.170666 sshd[5182]: Connection closed by 139.178.89.65 port 51944
Feb 13 15:36:55.171426 sshd-session[5164]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:55.175381 systemd[1]: sshd@15-138.199.158.182:22-139.178.89.65:51944.service: Deactivated successfully.
Feb 13 15:36:55.181641 systemd-logind[1602]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:36:55.182550 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:36:55.183708 systemd-logind[1602]: Removed session 16.
Feb 13 15:37:00.337999 systemd[1]: Started sshd@16-138.199.158.182:22-139.178.89.65:33842.service - OpenSSH per-connection server daemon (139.178.89.65:33842).
Feb 13 15:37:01.329976 sshd[5214]: Accepted publickey for core from 139.178.89.65 port 33842 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:37:01.332158 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:01.339146 systemd-logind[1602]: New session 17 of user core.
Feb 13 15:37:01.342220 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:37:02.090070 sshd[5217]: Connection closed by 139.178.89.65 port 33842
Feb 13 15:37:02.091671 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:02.096195 systemd[1]: sshd@16-138.199.158.182:22-139.178.89.65:33842.service: Deactivated successfully.
Feb 13 15:37:02.101816 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:37:02.103239 systemd-logind[1602]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:37:02.104290 systemd-logind[1602]: Removed session 17.
Feb 13 15:37:18.837484 kubelet[2898]: E0213 15:37:18.837443 2898 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41326->10.0.0.2:2379: read: connection timed out"
Feb 13 15:37:18.848995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382-rootfs.mount: Deactivated successfully.
Feb 13 15:37:18.857130 containerd[1631]: time="2025-02-13T15:37:18.857050104Z" level=info msg="shim disconnected" id=30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382 namespace=k8s.io
Feb 13 15:37:18.858092 containerd[1631]: time="2025-02-13T15:37:18.858046552Z" level=warning msg="cleaning up after shim disconnected" id=30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382 namespace=k8s.io
Feb 13 15:37:18.858092 containerd[1631]: time="2025-02-13T15:37:18.858075952Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:18.887927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8-rootfs.mount: Deactivated successfully.
Feb 13 15:37:18.892840 containerd[1631]: time="2025-02-13T15:37:18.892554824Z" level=info msg="shim disconnected" id=240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8 namespace=k8s.io
Feb 13 15:37:18.892840 containerd[1631]: time="2025-02-13T15:37:18.892631065Z" level=warning msg="cleaning up after shim disconnected" id=240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8 namespace=k8s.io
Feb 13 15:37:18.892840 containerd[1631]: time="2025-02-13T15:37:18.892644585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:19.881248 kubelet[2898]: I0213 15:37:19.881189 2898 scope.go:117] "RemoveContainer" containerID="30a7d84d613ddc94914554e4e26afed9ff6b1a108fc17f539aeddd8377871382"
Feb 13 15:37:19.884556 kubelet[2898]: I0213 15:37:19.884422 2898 scope.go:117] "RemoveContainer" containerID="240bde964294d7e1a2fdc5bce624d4c1e4f6bdeddeaf9620242aa2c62c70e2e8"
Feb 13 15:37:19.885923 containerd[1631]: time="2025-02-13T15:37:19.885886099Z" level=info msg="CreateContainer within sandbox \"6b80bdf0ae1f543faebba0f197190c4eb846749a5f3b48cd2d31457096876574\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:37:19.887905 containerd[1631]: time="2025-02-13T15:37:19.887297470Z" level=info msg="CreateContainer within sandbox \"e2f5a294e2c9d38e2427f1a26f643ae00a875cb0c6d6d0296b4c69aebf056300\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:37:19.908541 containerd[1631]: time="2025-02-13T15:37:19.908479317Z" level=info msg="CreateContainer within sandbox \"6b80bdf0ae1f543faebba0f197190c4eb846749a5f3b48cd2d31457096876574\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"56653412fe03f14543c82926e16330994dc8415147067459a027652607ac96b2\""
Feb 13 15:37:19.909483 containerd[1631]: time="2025-02-13T15:37:19.909448325Z" level=info msg="StartContainer for \"56653412fe03f14543c82926e16330994dc8415147067459a027652607ac96b2\""
Feb 13 15:37:19.914550 containerd[1631]: time="2025-02-13T15:37:19.914499764Z" level=info msg="CreateContainer within sandbox \"e2f5a294e2c9d38e2427f1a26f643ae00a875cb0c6d6d0296b4c69aebf056300\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1fa60d29dc98b65c57d8fc328bac10169b0038357734e4a50b239800f74fa424\""
Feb 13 15:37:19.915071 containerd[1631]: time="2025-02-13T15:37:19.915045649Z" level=info msg="StartContainer for \"1fa60d29dc98b65c57d8fc328bac10169b0038357734e4a50b239800f74fa424\""
Feb 13 15:37:20.001404 containerd[1631]: time="2025-02-13T15:37:20.001352529Z" level=info msg="StartContainer for \"56653412fe03f14543c82926e16330994dc8415147067459a027652607ac96b2\" returns successfully"
Feb 13 15:37:20.007747 containerd[1631]: time="2025-02-13T15:37:20.007680739Z" level=info msg="StartContainer for \"1fa60d29dc98b65c57d8fc328bac10169b0038357734e4a50b239800f74fa424\" returns successfully"
Feb 13 15:37:22.608306 kubelet[2898]: E0213 15:37:22.608093 2898 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41166->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-1-4-c758b1cf91.1823ce97b899508c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-1-4-c758b1cf91,UID:dc5a2170d476c972c3ecf9cc09f834a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-4-c758b1cf91,},FirstTimestamp:2025-02-13 15:37:12.127582348 +0000 UTC m=+343.289477722,LastTimestamp:2025-02-13 15:37:12.127582348 +0000 UTC m=+343.289477722,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-4-c758b1cf91,}"