Jan 13 20:16:41.905052 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 20:16:41.905081 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025 Jan 13 20:16:41.905093 kernel: KASLR enabled Jan 13 20:16:41.905099 kernel: efi: EFI v2.7 by EDK II Jan 13 20:16:41.905106 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98 Jan 13 20:16:41.905112 kernel: random: crng init done Jan 13 20:16:41.905119 kernel: secureboot: Secure boot disabled Jan 13 20:16:41.905126 kernel: ACPI: Early table checksum verification disabled Jan 13 20:16:41.905132 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Jan 13 20:16:41.905139 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 13 20:16:41.905147 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905153 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905160 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905166 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905174 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905183 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905189 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905196 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905203 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:16:41.905209 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 13 20:16:41.905216 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 13 20:16:41.905222 kernel: NUMA: Failed to initialise from firmware Jan 13 20:16:41.905229 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:16:41.905236 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Jan 13 20:16:41.905242 kernel: Zone ranges: Jan 13 20:16:41.905249 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:16:41.905257 kernel: DMA32 empty Jan 13 20:16:41.905263 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 13 20:16:41.905270 kernel: Movable zone start for each node Jan 13 20:16:41.905276 kernel: Early memory node ranges Jan 13 20:16:41.905283 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Jan 13 20:16:41.905289 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Jan 13 20:16:41.905296 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Jan 13 20:16:41.905303 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Jan 13 20:16:41.905309 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Jan 13 20:16:41.905316 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:16:41.905322 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 13 20:16:41.905330 kernel: psci: probing for conduit method from ACPI. Jan 13 20:16:41.905337 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 20:16:41.905343 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:16:41.905353 kernel: psci: Trusted OS migration not required Jan 13 20:16:41.905360 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:16:41.905367 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 20:16:41.905376 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:16:41.905383 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:16:41.905390 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:16:41.905397 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:16:41.905404 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:16:41.905411 kernel: CPU features: detected: Hardware dirty bit management Jan 13 20:16:41.905418 kernel: CPU features: detected: Spectre-v4 Jan 13 20:16:41.905425 kernel: CPU features: detected: Spectre-BHB Jan 13 20:16:41.905432 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 20:16:41.905439 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 20:16:41.905446 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 20:16:41.905454 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 20:16:41.905461 kernel: alternatives: applying boot alternatives Jan 13 20:16:41.905470 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:16:41.905503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:16:41.905512 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:16:41.905519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:16:41.905526 kernel: Fallback order for Node 0: 0 Jan 13 20:16:41.905533 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 13 20:16:41.905540 kernel: Policy zone: Normal Jan 13 20:16:41.905547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:16:41.905554 kernel: software IO TLB: area num 2. Jan 13 20:16:41.905564 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 13 20:16:41.905571 kernel: Memory: 3881336K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214664K reserved, 0K cma-reserved) Jan 13 20:16:41.905578 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:16:41.905585 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:16:41.905593 kernel: rcu: RCU event tracing is enabled. Jan 13 20:16:41.905600 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:16:41.905608 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:16:41.905615 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:16:41.905622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:16:41.905629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:16:41.905636 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:16:41.905645 kernel: GICv3: 256 SPIs implemented Jan 13 20:16:41.905652 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:16:41.905659 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:16:41.905666 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 20:16:41.905673 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 20:16:41.905680 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 20:16:41.905687 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:16:41.905694 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:16:41.905702 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 13 20:16:41.905709 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 13 20:16:41.905716 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:16:41.905725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:16:41.905732 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 20:16:41.905752 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 20:16:41.905760 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 20:16:41.905767 kernel: Console: colour dummy device 80x25 Jan 13 20:16:41.905775 kernel: ACPI: Core revision 20230628 Jan 13 20:16:41.905782 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 20:16:41.905790 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:16:41.905797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:16:41.905805 kernel: landlock: Up and running. Jan 13 20:16:41.905814 kernel: SELinux: Initializing. Jan 13 20:16:41.905821 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:16:41.905829 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:16:41.905837 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:16:41.905844 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:16:41.905852 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:16:41.905859 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:16:41.905866 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 20:16:41.905874 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 20:16:41.905882 kernel: Remapping and enabling EFI services. Jan 13 20:16:41.905890 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:16:41.905897 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:16:41.905904 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 20:16:41.905912 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 13 20:16:41.905919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:16:41.905926 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 20:16:41.905934 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:16:41.905941 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:16:41.905948 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:16:41.905957 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 20:16:41.905964 kernel: CPU features: detected: Common not Private translations Jan 13 20:16:41.905977 kernel: CPU features: detected: CRC32 instructions Jan 13 20:16:41.905986 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 20:16:41.905994 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 20:16:41.906002 kernel: CPU features: detected: LSE atomic instructions Jan 13 20:16:41.906009 kernel: CPU features: detected: Privileged Access Never Jan 13 20:16:41.906017 kernel: CPU features: detected: RAS Extension Support Jan 13 20:16:41.906025 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 20:16:41.906034 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:16:41.906041 kernel: alternatives: applying system-wide alternatives Jan 13 20:16:41.906049 kernel: devtmpfs: initialized Jan 13 20:16:41.906057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:16:41.906065 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:16:41.906073 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:16:41.906080 kernel: SMBIOS 3.0.0 present. Jan 13 20:16:41.906088 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 13 20:16:41.906097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:16:41.906105 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:16:41.906113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:16:41.906121 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:16:41.906128 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:16:41.906136 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Jan 13 20:16:41.906144 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:16:41.906151 kernel: cpuidle: using governor menu Jan 13 20:16:41.906159 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 20:16:41.906169 kernel: ASID allocator initialised with 32768 entries Jan 13 20:16:41.906176 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:16:41.906184 kernel: Serial: AMBA PL011 UART driver Jan 13 20:16:41.906192 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 20:16:41.906200 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 20:16:41.906208 kernel: Modules: 508960 pages in range for PLT usage Jan 13 20:16:41.906215 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:16:41.906223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:16:41.906231 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:16:41.906240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:16:41.906248 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:16:41.906255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:16:41.906263 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:16:41.906270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:16:41.906278 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:16:41.906286 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:16:41.906293 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:16:41.906301 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:16:41.906310 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:16:41.906318 kernel: ACPI: Interpreter enabled Jan 13 20:16:41.906326 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:16:41.906333 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:16:41.906341 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 20:16:41.906349 kernel: printk: console [ttyAMA0] enabled Jan 13 20:16:41.906357 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:16:41.906641 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:16:41.908830 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:16:41.908928 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:16:41.908995 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 20:16:41.909063 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 20:16:41.909074 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 20:16:41.909082 kernel: PCI host bridge to bus 0000:00 Jan 13 20:16:41.909158 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 20:16:41.909226 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:16:41.909288 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 20:16:41.909349 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:16:41.909437 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 20:16:41.909556 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 13 20:16:41.909632 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 13 20:16:41.909701 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:16:41.909806 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.909880 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 13 20:16:41.909958 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.910027 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 13 20:16:41.910100 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.910168 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 13 20:16:41.910249 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.910317 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 13 20:16:41.910392 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.910459 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 13 20:16:41.910804 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.910885 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 13 20:16:41.910965 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.911032 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 13 20:16:41.911104 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.911171 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 13 20:16:41.911243 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:16:41.911308 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 13 20:16:41.911387 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 13 20:16:41.911457 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Jan 13 20:16:41.911559 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:16:41.911632 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 13 20:16:41.911701 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:16:41.911822 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:16:41.911907 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 13 20:16:41.911984 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 13 20:16:41.912062 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 13 20:16:41.912131 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 13 20:16:41.912199 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 13 20:16:41.912275 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 13 20:16:41.912344 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 13 20:16:41.912428 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 13 20:16:41.914610 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 13 20:16:41.914728 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 13 20:16:41.914824 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 13 20:16:41.914895 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:16:41.914974 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:16:41.915053 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 13 20:16:41.915122 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 13 20:16:41.915193 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:16:41.915269 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 13 20:16:41.915336 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:16:41.915403 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:16:41.915474 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 13 20:16:41.915567 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 13 20:16:41.915637 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 13 20:16:41.915707 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 13 20:16:41.915844 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:16:41.916191 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:16:41.916791 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 13 20:16:41.916901 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 13 20:16:41.916981 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 13 20:16:41.917056 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 13 20:16:41.917125 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 13 20:16:41.917191 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Jan 13 20:16:41.917262 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:16:41.917330 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:16:41.917398 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:16:41.917471 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:16:41.918469 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:16:41.918562 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:16:41.918638 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:16:41.918707 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:16:41.918824 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:16:41.918903 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:16:41.918970 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:16:41.919044 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:16:41.919116 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jan 13 20:16:41.919183 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 
0x8000000000-0x80001fffff 64bit pref] Jan 13 20:16:41.919254 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 13 20:16:41.919321 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:16:41.919394 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 13 20:16:41.919462 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:16:41.920412 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 13 20:16:41.920517 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:16:41.920592 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 13 20:16:41.920660 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:16:41.920731 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 13 20:16:41.920854 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:16:41.920925 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 13 20:16:41.921003 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:16:41.921074 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 13 20:16:41.921144 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:16:41.921218 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 13 20:16:41.921288 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:16:41.921360 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 13 20:16:41.921431 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 13 20:16:41.921538 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 13 20:16:41.921610 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 13 20:16:41.921683 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 13 20:16:41.921768 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 13 20:16:41.921845 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 13 20:16:41.921913 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 13 20:16:41.921982 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 13 20:16:41.922050 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 13 20:16:41.922125 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 13 20:16:41.922194 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 13 20:16:41.922264 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 13 20:16:41.922331 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 13 20:16:41.922401 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 13 20:16:41.922468 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 13 20:16:41.924776 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 13 20:16:41.924866 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 13 20:16:41.924946 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 13 20:16:41.925015 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jan 13 20:16:41.925087 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 
13 20:16:41.925165 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 13 20:16:41.925235 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:16:41.925305 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 13 20:16:41.925375 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 13 20:16:41.925447 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 13 20:16:41.925534 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 13 20:16:41.925603 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:16:41.925677 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 13 20:16:41.925764 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 13 20:16:41.925843 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 13 20:16:41.925913 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 13 20:16:41.925978 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:16:41.926054 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:16:41.926124 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 13 20:16:41.926193 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 13 20:16:41.926260 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 13 20:16:41.926327 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 13 20:16:41.926398 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:16:41.926473 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:16:41.928698 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 13 20:16:41.928840 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 13 20:16:41.928912 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 13 20:16:41.928978 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:16:41.929055 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 13 20:16:41.929125 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 13 20:16:41.929202 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 13 20:16:41.929267 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 13 20:16:41.929334 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:16:41.929409 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 13 20:16:41.929491 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 13 20:16:41.929568 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 13 20:16:41.929636 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 13 20:16:41.929702 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 13 20:16:41.929795 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:16:41.929875 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 13 20:16:41.929946 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 13 20:16:41.930015 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 13 20:16:41.930085 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 13 20:16:41.930156 kernel: pci 0000:00:02.6: bridge window 
[io 0x7000-0x7fff] Jan 13 20:16:41.930222 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 13 20:16:41.930295 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:16:41.930368 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 13 20:16:41.930439 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 13 20:16:41.933814 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 13 20:16:41.933939 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:16:41.934026 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 13 20:16:41.934109 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 13 20:16:41.934179 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 13 20:16:41.934256 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:16:41.934325 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 20:16:41.934385 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:16:41.934444 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 20:16:41.934540 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 13 20:16:41.934604 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 13 20:16:41.934665 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:16:41.934753 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 13 20:16:41.934817 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 13 20:16:41.934879 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:16:41.934957 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 13 20:16:41.935024 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 13 20:16:41.935085 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:16:41.935156 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 20:16:41.935222 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 13 20:16:41.935288 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:16:41.935372 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 13 20:16:41.935436 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 13 20:16:41.936617 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:16:41.936771 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 13 20:16:41.936872 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 13 20:16:41.936943 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:16:41.937019 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 13 20:16:41.937083 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 13 20:16:41.937150 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:16:41.937227 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 13 20:16:41.937291 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 13 20:16:41.937355 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:16:41.937428 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 13 20:16:41.937523 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 13 20:16:41.937592 kernel: pci_bus 0000:09: resource 2 
[mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:16:41.937606 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:16:41.937615 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:16:41.937624 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:16:41.937633 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:16:41.937641 kernel: iommu: Default domain type: Translated Jan 13 20:16:41.937649 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:16:41.937657 kernel: efivars: Registered efivars operations Jan 13 20:16:41.937666 kernel: vgaarb: loaded Jan 13 20:16:41.937674 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:16:41.937685 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:16:41.937694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:16:41.937702 kernel: pnp: PnP ACPI init Jan 13 20:16:41.937801 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 20:16:41.937814 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:16:41.937823 kernel: NET: Registered PF_INET protocol family Jan 13 20:16:41.937831 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:16:41.937840 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:16:41.937848 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:16:41.937859 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:16:41.937868 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:16:41.937878 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:16:41.937887 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:16:41.937895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:16:41.937904 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:16:41.937988 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 13 20:16:41.938001 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:16:41.938011 kernel: kvm [1]: HYP mode not available Jan 13 20:16:41.938020 kernel: Initialise system trusted keyrings Jan 13 20:16:41.938028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:16:41.938036 kernel: Key type asymmetric registered Jan 13 20:16:41.938045 kernel: Asymmetric key parser 'x509' registered Jan 13 20:16:41.938053 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:16:41.938061 kernel: io scheduler mq-deadline registered Jan 13 20:16:41.938069 kernel: io scheduler kyber registered Jan 13 20:16:41.938078 kernel: io scheduler bfq registered Jan 13 20:16:41.938086 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:16:41.938162 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 13 20:16:41.938233 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 13 20:16:41.938302 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.938375 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 13 20:16:41.938445 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 13 20:16:41.939381 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Jan 13 20:16:41.939499 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 13 20:16:41.939572 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 13 20:16:41.940925 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.941053 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 13 20:16:41.941135 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 13 20:16:41.941205 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.941287 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 13 20:16:41.941357 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 13 20:16:41.941425 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.941518 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 13 20:16:41.941591 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 13 20:16:41.941660 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.941771 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 13 20:16:41.941857 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 13 20:16:41.941926 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.941997 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 13 20:16:41.942065 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 13 20:16:41.942132 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.942148 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 20:16:41.942220 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 13 20:16:41.942293 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 13 20:16:41.942362 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:41.942373 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:16:41.942382 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:16:41.942390 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:16:41.942469 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 13 20:16:41.944642 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 20:16:41.944731 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 13 20:16:41.944779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:16:41.944789 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:16:41.944874 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 13 20:16:41.944887 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 13 20:16:41.944896 kernel: thunder_xcv, ver 1.0 Jan 13 20:16:41.944911 kernel: thunder_bgx, ver 1.0 Jan 13 20:16:41.944920 kernel: nicpf, ver 1.0 Jan 13 20:16:41.944928 kernel: nicvf, ver 1.0 Jan 13 20:16:41.945014 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:16:41.945081 
kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:41 UTC (1736799401) Jan 13 20:16:41.945092 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:16:41.945100 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:16:41.945109 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:16:41.945120 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:16:41.945128 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:16:41.945136 kernel: Segment Routing with IPv6 Jan 13 20:16:41.945144 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:16:41.945154 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:16:41.945162 kernel: Key type dns_resolver registered Jan 13 20:16:41.945171 kernel: registered taskstats version 1 Jan 13 20:16:41.945179 kernel: Loading compiled-in X.509 certificates Jan 13 20:16:41.945187 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:16:41.945197 kernel: Key type .fscrypt registered Jan 13 20:16:41.945205 kernel: Key type fscrypt-provisioning registered Jan 13 20:16:41.945214 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:16:41.945222 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:16:41.945230 kernel: ima: No architecture policies found Jan 13 20:16:41.945238 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:16:41.945246 kernel: clk: Disabling unused clocks Jan 13 20:16:41.945254 kernel: Freeing unused kernel memory: 39680K Jan 13 20:16:41.945263 kernel: Run /init as init process Jan 13 20:16:41.945272 kernel: with arguments: Jan 13 20:16:41.945280 kernel: /init Jan 13 20:16:41.945288 kernel: with environment: Jan 13 20:16:41.945296 kernel: HOME=/ Jan 13 20:16:41.945304 kernel: TERM=linux Jan 13 20:16:41.945312 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:16:41.945322 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:41.945333 systemd[1]: Detected virtualization kvm. Jan 13 20:16:41.945343 systemd[1]: Detected architecture arm64. Jan 13 20:16:41.945351 systemd[1]: Running in initrd. Jan 13 20:16:41.945360 systemd[1]: No hostname configured, using default hostname. Jan 13 20:16:41.945368 systemd[1]: Hostname set to . Jan 13 20:16:41.945377 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:41.945385 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:16:41.945394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:41.945403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:41.945414 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:16:41.945423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:41.945431 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:16:41.945440 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 13 20:16:41.945450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:16:41.945459 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:16:41.945468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:41.945512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:41.945522 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:41.945531 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:41.945540 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:41.945548 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:41.945557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:41.945566 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:16:41.945575 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:16:41.945586 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:16:41.945595 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:41.945604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:41.945612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:41.945621 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:41.945630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:16:41.945638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:41.945647 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:16:41.945656 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:16:41.945667 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:41.945678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:41.945688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:41.945697 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:41.945705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:41.945716 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:16:41.945763 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 20:16:41.945787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:16:41.945799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:41.945808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:16:41.945817 kernel: Bridge firewalling registered Jan 13 20:16:41.945826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:16:41.945835 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:41.945845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:41.945854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:16:41.945863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:41.945873 systemd-journald[237]: Journal started Jan 13 20:16:41.945895 systemd-journald[237]: Runtime Journal (/run/log/journal/02259cbdfb8e4ca29d70d365ca5e78f2) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:16:41.896551 systemd-modules-load[238]: Inserted module 'overlay' Jan 13 20:16:41.947778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:41.916333 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 13 20:16:41.953561 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:41.952513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:41.954996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:41.962935 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:16:41.966701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:41.978713 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:41.980701 dracut-cmdline[270]: dracut-dracut-053 Jan 13 20:16:41.985725 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:16:41.986808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:42.027437 systemd-resolved[283]: Positive Trust Anchors: Jan 13 20:16:42.027526 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:42.027557 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:42.033803 systemd-resolved[283]: Defaulting to hostname 'linux'. Jan 13 20:16:42.034962 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:42.035583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:42.089528 kernel: SCSI subsystem initialized Jan 13 20:16:42.094516 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:16:42.102533 kernel: iscsi: registered transport (tcp) Jan 13 20:16:42.116509 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:16:42.116575 kernel: QLogic iSCSI HBA Driver Jan 13 20:16:42.170034 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:42.176675 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:16:42.194530 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:16:42.194601 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:16:42.194613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:16:42.246533 kernel: raid6: neonx8 gen() 15648 MB/s Jan 13 20:16:42.263547 kernel: raid6: neonx4 gen() 13957 MB/s Jan 13 20:16:42.280518 kernel: raid6: neonx2 gen() 13171 MB/s Jan 13 20:16:42.297533 kernel: raid6: neonx1 gen() 10456 MB/s Jan 13 20:16:42.314508 kernel: raid6: int64x8 gen() 6934 MB/s Jan 13 20:16:42.331526 kernel: raid6: int64x4 gen() 7318 MB/s Jan 13 20:16:42.348529 kernel: raid6: int64x2 gen() 6102 MB/s Jan 13 20:16:42.365549 kernel: raid6: int64x1 gen() 5040 MB/s Jan 13 20:16:42.365620 kernel: raid6: using algorithm neonx8 gen() 15648 MB/s Jan 13 20:16:42.382549 kernel: raid6: .... xor() 11875 MB/s, rmw enabled Jan 13 20:16:42.382641 kernel: raid6: using neon recovery algorithm Jan 13 20:16:42.387663 kernel: xor: measuring software checksum speed Jan 13 20:16:42.387752 kernel: 8regs : 19807 MB/sec Jan 13 20:16:42.387766 kernel: 32regs : 19664 MB/sec Jan 13 20:16:42.387778 kernel: arm64_neon : 25866 MB/sec Jan 13 20:16:42.388808 kernel: xor: using function: arm64_neon (25866 MB/sec) Jan 13 20:16:42.441524 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:16:42.455472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:42.461813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:42.475360 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jan 13 20:16:42.478934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:42.484686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:16:42.502311 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 13 20:16:42.541312 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:16:42.546852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:42.596623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:42.605772 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:16:42.627059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:42.628266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:42.629619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:42.632110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:42.637383 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:16:42.665546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:16:42.719556 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:16:42.720423 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:16:42.720452 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 13 20:16:42.720388 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:42.720527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:42.722239 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:16:42.736798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 20:16:42.737027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:42.738664 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:42.748901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:42.755134 kernel: ACPI: bus type USB registered Jan 13 20:16:42.755181 kernel: usbcore: registered new interface driver usbfs Jan 13 20:16:42.755199 kernel: usbcore: registered new interface driver hub Jan 13 20:16:42.761968 kernel: usbcore: registered new device driver usb Jan 13 20:16:42.768067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:42.785849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:16:42.794266 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:16:42.794462 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:16:42.798502 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:16:42.798612 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:16:42.798700 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:16:42.799560 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:16:42.799689 kernel: hub 1-0:1.0: USB hub found Jan 13 20:16:42.799865 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:16:42.799960 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:16:42.801259 kernel: hub 2-0:1.0: USB hub found Jan 13 20:16:42.801418 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 13 20:16:42.801860 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:16:42.801962 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 13 20:16:42.802053 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:16:42.802064 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:16:42.805521 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 13 20:16:42.814691 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 13 20:16:42.814856 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 13 20:16:42.814943 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 13 20:16:42.815024 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:16:42.815125 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:16:42.815136 kernel: GPT:17805311 != 80003071 Jan 13 20:16:42.815146 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:16:42.815165 kernel: GPT:17805311 != 80003071 Jan 13 20:16:42.815174 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:16:42.815185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:42.815195 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 13 20:16:42.816929 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:42.854129 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (508) Jan 13 20:16:42.863794 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 13 20:16:42.875515 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (527) Jan 13 20:16:42.883652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:42.890067 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 13 20:16:42.895460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:16:42.897389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 13 20:16:42.903724 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:16:42.923723 disk-uuid[575]: Primary Header is updated. Jan 13 20:16:42.923723 disk-uuid[575]: Secondary Entries is updated. Jan 13 20:16:42.923723 disk-uuid[575]: Secondary Header is updated. Jan 13 20:16:42.930916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:43.030514 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:16:43.271581 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:16:43.408169 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:16:43.408230 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:16:43.411577 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:16:43.465064 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:16:43.465555 kernel: usbcore: registered new interface driver usbhid Jan 13 20:16:43.465603 kernel: usbhid: USB HID core driver Jan 13 20:16:43.943621 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:43.944757 disk-uuid[576]: The operation has completed successfully. Jan 13 20:16:43.998805 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:16:44.000129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:16:44.016717 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:16:44.020276 sh[590]: Success Jan 13 20:16:44.033572 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:16:44.094067 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:16:44.097223 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:16:44.098558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:16:44.116016 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:16:44.116103 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:44.116119 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:16:44.116133 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:16:44.116645 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:16:44.124600 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:16:44.127830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 13 20:16:44.128696 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:16:44.135827 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:16:44.139837 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:16:44.153317 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:44.153367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:44.153378 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:44.157502 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:44.157565 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:44.167798 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:16:44.168506 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:44.175535 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:16:44.180064 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:16:44.275441 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:44.283691 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:44.299528 ignition[688]: Ignition 2.20.0 Jan 13 20:16:44.299540 ignition[688]: Stage: fetch-offline Jan 13 20:16:44.299577 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:44.299586 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:44.301805 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:44.299801 ignition[688]: parsed url from cmdline: "" Jan 13 20:16:44.299805 ignition[688]: no config URL provided Jan 13 20:16:44.299810 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:44.299816 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:44.299822 ignition[688]: failed to fetch config: resource requires networking Jan 13 20:16:44.300026 ignition[688]: Ignition finished successfully Jan 13 20:16:44.309536 systemd-networkd[778]: lo: Link UP Jan 13 20:16:44.309545 systemd-networkd[778]: lo: Gained carrier Jan 13 20:16:44.311819 systemd-networkd[778]: Enumeration completed Jan 13 20:16:44.312026 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:44.313500 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:44.313503 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:44.313946 systemd[1]: Reached target network.target - Network. Jan 13 20:16:44.314922 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:44.314925 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:44.315513 systemd-networkd[778]: eth0: Link UP Jan 13 20:16:44.315517 systemd-networkd[778]: eth0: Gained carrier Jan 13 20:16:44.315525 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:16:44.321932 systemd-networkd[778]: eth1: Link UP Jan 13 20:16:44.321936 systemd-networkd[778]: eth1: Gained carrier Jan 13 20:16:44.321948 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:44.323149 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:16:44.339531 ignition[782]: Ignition 2.20.0 Jan 13 20:16:44.340070 ignition[782]: Stage: fetch Jan 13 20:16:44.340285 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:44.340297 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:44.340383 ignition[782]: parsed url from cmdline: "" Jan 13 20:16:44.340386 ignition[782]: no config URL provided Jan 13 20:16:44.340391 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:44.340397 ignition[782]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:44.340502 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 13 20:16:44.341211 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 13 20:16:44.359607 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:44.374640 systemd-networkd[778]: eth0: DHCPv4 address 138.199.153.195/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:44.542055 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 13 20:16:44.546191 ignition[782]: GET result: OK Jan 13 20:16:44.546314 ignition[782]: parsing config with SHA512: fb3407fe630993c1039bd8ff921468b0a0dd8c094098302a0e4fc3717c5ec350cfb74fec8c05688d15b0df72e7d6aa0caa47ee993dd7c23fb2965c766fd52b44 Jan 13 20:16:44.551638 unknown[782]: fetched base config from "system" Jan 13 20:16:44.551650 unknown[782]: fetched base config from "system" Jan 13 20:16:44.552052 ignition[782]: fetch: fetch complete Jan 13 20:16:44.551657 unknown[782]: fetched user config from "hetzner" Jan 13 20:16:44.552058 ignition[782]: fetch: fetch passed Jan 13 20:16:44.554217 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:16:44.552111 ignition[782]: Ignition finished successfully Jan 13 20:16:44.564011 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:16:44.577328 ignition[789]: Ignition 2.20.0 Jan 13 20:16:44.577340 ignition[789]: Stage: kargs Jan 13 20:16:44.577562 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:44.577572 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:44.578605 ignition[789]: kargs: kargs passed Jan 13 20:16:44.578663 ignition[789]: Ignition finished successfully Jan 13 20:16:44.580091 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:16:44.586750 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:16:44.598594 ignition[796]: Ignition 2.20.0 Jan 13 20:16:44.598607 ignition[796]: Stage: disks Jan 13 20:16:44.598842 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:44.598856 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:44.599801 ignition[796]: disks: disks passed Jan 13 20:16:44.599857 ignition[796]: Ignition finished successfully Jan 13 20:16:44.602600 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 13 20:16:44.604341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:44.605327 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:16:44.606641 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:44.607544 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:44.608370 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:44.614829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:16:44.636442 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:16:44.642575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:16:44.650898 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:16:44.710566 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:16:44.712460 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:16:44.715319 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:44.725731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:44.729677 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:16:44.733182 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:16:44.736707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:16:44.736773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:44.751579 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812) Jan 13 20:16:44.754821 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:44.754876 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:44.754889 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:44.755740 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:16:44.760682 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:16:44.769689 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:44.769777 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:44.778706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:16:44.824395 coreos-metadata[814]: Jan 13 20:16:44.823 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 13 20:16:44.826905 coreos-metadata[814]: Jan 13 20:16:44.826 INFO Fetch successful Jan 13 20:16:44.830117 coreos-metadata[814]: Jan 13 20:16:44.829 INFO wrote hostname ci-4152-2-0-6-9e5a1dc0a6 to /sysroot/etc/hostname Jan 13 20:16:44.832525 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 13 20:16:44.836123 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:16:44.843596 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:16:44.850356 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:16:44.856702 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:16:44.998331 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:45.005819 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:16:45.013782 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:16:45.021545 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:45.052559 ignition[929]: INFO : Ignition 2.20.0 Jan 13 20:16:45.052565 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:16:45.054064 ignition[929]: INFO : Stage: mount Jan 13 20:16:45.054470 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:45.054470 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:45.056067 ignition[929]: INFO : mount: mount passed Jan 13 20:16:45.056509 ignition[929]: INFO : Ignition finished successfully Jan 13 20:16:45.057886 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:16:45.063799 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:16:45.115574 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:16:45.121802 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:45.135557 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940) Jan 13 20:16:45.136999 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:45.137045 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:45.137071 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:45.140514 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:45.140587 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:45.143905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:16:45.166489 ignition[957]: INFO : Ignition 2.20.0 Jan 13 20:16:45.166489 ignition[957]: INFO : Stage: files Jan 13 20:16:45.167580 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:45.167580 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:45.168757 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:16:45.169979 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:16:45.169979 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:16:45.172991 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:16:45.173956 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:16:45.173956 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:16:45.173498 unknown[957]: wrote ssh authorized keys file for user: core Jan 13 20:16:45.176814 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:45.176814 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:16:45.330996 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:16:45.408644 systemd-networkd[778]: eth0: Gained IPv6LL Jan 13 20:16:45.920783 systemd-networkd[778]: eth1: Gained IPv6LL Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:46.134516 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:16:46.144899 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 13 20:16:46.776671 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 20:16:49.023058 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:16:49.023058 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:49.026897 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:49.026897 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:49.026897 ignition[957]: INFO : files: files passed Jan 13 20:16:49.026897 ignition[957]: INFO : Ignition finished successfully Jan 13 20:16:49.028158 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:16:49.039774 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:16:49.042137 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:16:49.048992 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:16:49.050535 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 20:16:49.060658 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:49.060658 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:49.062374 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:49.065360 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:49.067076 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:16:49.071764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:16:49.107156 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:16:49.107308 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:16:49.109122 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:16:49.110161 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:16:49.112919 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:16:49.126952 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:16:49.148596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:49.156822 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:16:49.169001 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:49.169782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:49.170436 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:16:49.171519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:16:49.171642 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:49.173111 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:16:49.173680 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:16:49.174958 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:16:49.176129 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:49.177262 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:49.178144 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:16:49.179103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:49.180165 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:16:49.181111 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:16:49.182033 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:16:49.182938 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:16:49.183065 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:16:49.184346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:49.184981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:49.185852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:16:49.186245 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 20:16:49.186932 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:16:49.187051 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:49.188419 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:16:49.188546 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:49.189740 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:16:49.189838 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:16:49.190708 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 20:16:49.190808 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:16:49.201754 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:16:49.202272 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:16:49.202413 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:49.204746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:16:49.208625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:16:49.208933 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:49.211664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:16:49.212126 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:16:49.222162 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:16:49.223784 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:16:49.228303 ignition[1009]: INFO : Ignition 2.20.0 Jan 13 20:16:49.229264 ignition[1009]: INFO : Stage: umount Jan 13 20:16:49.230865 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:49.230865 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:49.230865 ignition[1009]: INFO : umount: umount passed Jan 13 20:16:49.230865 ignition[1009]: INFO : Ignition finished successfully Jan 13 20:16:49.232189 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:16:49.232307 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:16:49.233410 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:16:49.233464 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:16:49.234363 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:16:49.234408 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:16:49.235129 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:16:49.235169 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:16:49.236101 systemd[1]: Stopped target network.target - Network. Jan 13 20:16:49.238084 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:16:49.238154 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:49.239086 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:16:49.239520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:16:49.243743 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:49.244347 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 13 20:16:49.244813 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:16:49.245293 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:16:49.245337 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:49.246563 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:16:49.246599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:16:49.247519 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:16:49.247564 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:16:49.248773 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:16:49.248811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:49.250193 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:16:49.251333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:16:49.253093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:16:49.253669 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:16:49.253772 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:16:49.254822 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:16:49.254902 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:49.256551 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 13 20:16:49.258828 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 13 20:16:49.259936 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:16:49.260055 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:16:49.262827 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:16:49.262955 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:16:49.265213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:16:49.266254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:49.275537 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:16:49.276045 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:16:49.276115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:49.277357 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:16:49.277405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:49.278864 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:16:49.278912 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:49.280045 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:16:49.280080 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:49.282432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:49.291461 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:16:49.291773 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:49.293623 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:16:49.294643 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:16:49.295868 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 20:16:49.295948 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:49.297060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:16:49.297108 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:49.298086 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:16:49.298136 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:49.299683 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:16:49.299747 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:49.301198 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:49.301244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:49.307759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:16:49.308981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:16:49.309088 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:49.313454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:49.313555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:49.315589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:16:49.316343 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:16:49.318054 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:16:49.323765 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:16:49.335231 systemd[1]: Switching root. Jan 13 20:16:49.373220 systemd-journald[237]: Journal stopped Jan 13 20:16:50.328756 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 13 20:16:50.328891 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:16:50.328929 kernel: SELinux: policy capability open_perms=1 Jan 13 20:16:50.328954 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:16:50.328978 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:16:50.329010 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:16:50.329035 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:16:50.329059 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:16:50.329090 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:16:50.329116 systemd[1]: Successfully loaded SELinux policy in 38.132ms. Jan 13 20:16:50.329158 kernel: audit: type=1403 audit(1736799409.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:16:50.329188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.209ms. Jan 13 20:16:50.329217 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:50.329245 systemd[1]: Detected virtualization kvm. Jan 13 20:16:50.329271 systemd[1]: Detected architecture arm64. Jan 13 20:16:50.329298 systemd[1]: Detected first boot. Jan 13 20:16:50.329324 systemd[1]: Hostname set to . Jan 13 20:16:50.329350 systemd[1]: Initializing machine ID from VM UUID. 
Jan 13 20:16:50.329377 zram_generator::config[1055]: No configuration found. Jan 13 20:16:50.329408 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:16:50.329435 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:16:50.329461 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:16:50.329523 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:16:50.329556 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:16:50.329610 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:16:50.329637 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:16:50.329664 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:16:50.329708 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:16:50.329736 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:16:50.329764 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:16:50.329790 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:16:50.329817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:50.329849 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:50.329876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:16:50.329903 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:16:50.329929 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:16:50.329961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:50.329987 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:16:50.330014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:50.330040 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:16:50.330066 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:16:50.330093 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:50.330122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:16:50.330149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:50.330183 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:50.330250 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:50.330276 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:50.330302 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:16:50.330328 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:16:50.330355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:50.330381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:50.330408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:50.330441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 20:16:50.330471 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:16:50.330567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:16:50.330597 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:16:50.330624 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:16:50.330651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:16:50.330677 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:16:50.330720 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:50.330752 systemd[1]: Reached target machines.target - Containers. Jan 13 20:16:50.330780 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:50.330807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:50.330834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:50.330860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:50.330888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:50.330925 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:50.330956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:50.330983 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:50.331011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:50.331038 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:50.331065 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:16:50.331094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:16:50.331123 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:16:50.331153 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:16:50.331181 kernel: fuse: init (API version 7.39) Jan 13 20:16:50.331206 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:50.331231 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:50.331258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:50.331285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:50.331312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:16:50.331392 systemd-journald[1121]: Collecting audit messages is disabled. Jan 13 20:16:50.331467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:50.331558 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:16:50.331592 systemd[1]: Stopped verity-setup.service. Jan 13 20:16:50.331618 kernel: loop: module loaded Jan 13 20:16:50.331646 systemd-journald[1121]: Journal started Jan 13 20:16:50.331757 systemd-journald[1121]: Runtime Journal (/run/log/journal/02259cbdfb8e4ca29d70d365ca5e78f2) is 8.0M, max 76.5M, 68.5M free. 
Jan 13 20:16:50.084993 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:50.109433 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:16:50.109853 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:16:50.333805 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:50.335469 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:50.341804 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:50.344732 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:50.345413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:50.347108 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:50.348652 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:50.351535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:50.353835 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:50.354001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:50.357743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:50.357916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:50.358788 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:50.358922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:50.361039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:50.362921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:50.363064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:50.364247 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:16:50.364457 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:50.365465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:50.366011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:50.367263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:50.368398 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:16:50.369865 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:50.383584 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:50.391769 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:50.396709 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:16:50.397565 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:50.397742 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:50.399636 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:50.405863 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:50.412968 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 13 20:16:50.415762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:50.427013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:16:50.433820 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:50.434461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:50.438721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:50.441634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:50.442963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:50.448801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:50.452063 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:50.457009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:50.458740 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:50.462837 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:50.490840 systemd-journald[1121]: Time spent on flushing to /var/log/journal/02259cbdfb8e4ca29d70d365ca5e78f2 is 39.189ms for 1123 entries. Jan 13 20:16:50.490840 systemd-journald[1121]: System Journal (/var/log/journal/02259cbdfb8e4ca29d70d365ca5e78f2) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:16:50.542222 systemd-journald[1121]: Received client request to flush runtime journal. Jan 13 20:16:50.542268 kernel: loop0: detected capacity change from 0 to 116808 Jan 13 20:16:50.508275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:50.513452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:50.526109 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:50.530221 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:50.546777 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:50.548927 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:50.552138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:50.571885 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:50.593772 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:50.597060 kernel: loop1: detected capacity change from 0 to 189592 Jan 13 20:16:50.608859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:50.610571 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:16:50.615283 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:16:50.617565 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:50.654496 kernel: loop2: detected capacity change from 0 to 8 Jan 13 20:16:50.666811 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. 
Jan 13 20:16:50.666830 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jan 13 20:16:50.675491 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:16:50.677822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:50.715516 kernel: loop4: detected capacity change from 0 to 116808 Jan 13 20:16:50.727510 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 20:16:50.749813 kernel: loop6: detected capacity change from 0 to 8 Jan 13 20:16:50.751563 kernel: loop7: detected capacity change from 0 to 113536 Jan 13 20:16:50.761079 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:16:50.763093 (sd-merge)[1190]: Merged extensions into '/usr'. Jan 13 20:16:50.769349 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:50.769370 systemd[1]: Reloading... Jan 13 20:16:50.886166 zram_generator::config[1212]: No configuration found. Jan 13 20:16:50.996519 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:51.022585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:51.068235 systemd[1]: Reloading finished in 298 ms. Jan 13 20:16:51.096389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:51.097549 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:51.109818 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:51.114096 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:51.126588 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:51.126618 systemd[1]: Reloading... Jan 13 20:16:51.160206 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:16:51.160904 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:51.163771 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:51.164195 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 13 20:16:51.164322 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 13 20:16:51.167302 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:51.167742 systemd-tmpfiles[1254]: Skipping /boot Jan 13 20:16:51.178214 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:51.179212 systemd-tmpfiles[1254]: Skipping /boot Jan 13 20:16:51.234564 zram_generator::config[1283]: No configuration found. Jan 13 20:16:51.331540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:51.377413 systemd[1]: Reloading finished in 250 ms. Jan 13 20:16:51.400603 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 13 20:16:51.411150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:51.423785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:51.428205 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:51.431606 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:16:51.438900 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:51.443934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:51.448843 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:51.452324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:51.457805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:51.461506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:51.467893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:51.468599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:51.474870 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:51.478054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:51.478217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:51.482426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:51.489899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:51.491824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:51.501640 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:51.510498 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 13 20:16:51.511595 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:16:51.541939 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:51.542719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:51.547741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:51.549832 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:16:51.552064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:51.573130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:51.573696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:51.574077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:51.576585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:51.577593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 13 20:16:51.577788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:51.578670 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:51.578815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:51.583128 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:16:51.587317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:51.587389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:51.596791 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:51.634761 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:51.642191 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:16:51.658771 augenrules[1386]: No rules Jan 13 20:16:51.665893 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:51.666103 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:51.755631 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:16:51.772307 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:16:51.774827 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:16:51.777105 systemd-resolved[1322]: Positive Trust Anchors: Jan 13 20:16:51.777788 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:51.777914 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:51.785026 systemd-resolved[1322]: Using system hostname 'ci-4152-2-0-6-9e5a1dc0a6'. Jan 13 20:16:51.787546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:51.788578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:51.799666 systemd-networkd[1349]: lo: Link UP Jan 13 20:16:51.799715 systemd-networkd[1349]: lo: Gained carrier Jan 13 20:16:51.803177 systemd-networkd[1349]: Enumeration completed Jan 13 20:16:51.803318 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:51.804772 systemd[1]: Reached target network.target - Network. Jan 13 20:16:51.806782 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.806798 systemd-networkd[1349]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:51.809049 systemd-networkd[1349]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.809059 systemd-networkd[1349]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 20:16:51.810210 systemd-networkd[1349]: eth0: Link UP Jan 13 20:16:51.810222 systemd-networkd[1349]: eth0: Gained carrier Jan 13 20:16:51.810241 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.820752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:16:51.823661 systemd-networkd[1349]: eth1: Link UP Jan 13 20:16:51.823683 systemd-networkd[1349]: eth1: Gained carrier Jan 13 20:16:51.823706 systemd-networkd[1349]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.846772 systemd-networkd[1349]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.858778 systemd-networkd[1349]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:51.859504 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:16:51.874803 systemd-networkd[1349]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:51.875892 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection. Jan 13 20:16:51.902763 systemd-networkd[1349]: eth0: DHCPv4 address 138.199.153.195/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:51.905012 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection. Jan 13 20:16:51.920337 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 13 20:16:51.920459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:51.930960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:51.936449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:51.939516 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1346) Jan 13 20:16:51.941134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:51.942398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:51.942441 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:51.943183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:51.944539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:51.968074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:51.968278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:51.970866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:51.971189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:51.972973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
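[editor note] Both interfaces are matched by /usr/lib/systemd/network/zz-default.network and then obtain DHCPv4 leases (10.0.0.3/32 on eth1, 138.199.153.195/32 on eth0). The unit itself is not reproduced in this log; as a rough sketch, a catch-all DHCP .network unit of that kind would look something like the following (contents assumed for illustration, not read from this image):

    # zz-default.network  -- illustrative catch-all unit, contents assumed
    [Match]
    Name=*

    [Network]
    DHCP=yes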
Jan 13 20:16:51.973024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:52.004751 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:16:52.004860 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:16:52.004876 kernel: [drm] features: -context_init Jan 13 20:16:52.008521 kernel: [drm] number of scanouts: 1 Jan 13 20:16:52.008620 kernel: [drm] number of cap sets: 0 Jan 13 20:16:52.013523 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:16:52.021751 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:16:52.023644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:52.028514 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:16:52.035236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:52.044357 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:52.047159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:52.048040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:52.051729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:52.070562 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:16:52.133497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:52.177774 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:52.192974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:16:52.208172 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:52.238651 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:16:52.239963 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:52.240911 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:52.241886 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:16:52.242789 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:16:52.243771 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:16:52.244535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:16:52.245273 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:16:52.246048 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:16:52.246076 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:52.246546 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:52.248428 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:16:52.250966 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:16:52.257554 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 13 20:16:52.260199 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:52.261472 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:16:52.262215 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:52.262746 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:52.263246 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:52.263277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:52.264646 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:16:52.270771 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:16:52.272751 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:52.274801 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:16:52.285632 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:16:52.291744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:16:52.292293 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:16:52.297357 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:16:52.309825 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:16:52.314538 jq[1446]: false Jan 13 20:16:52.314611 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:16:52.324238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:16:52.330287 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:16:52.339775 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:16:52.341141 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:16:52.341755 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:16:52.343649 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:16:52.347665 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:16:52.350645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:52.351925 coreos-metadata[1444]: Jan 13 20:16:52.351 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:16:52.354108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:16:52.355625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 13 20:16:52.356824 extend-filesystems[1447]: Found loop4 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found loop5 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found loop6 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found loop7 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda1 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda2 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda3 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found usr Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda4 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda6 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda7 Jan 13 20:16:52.356824 extend-filesystems[1447]: Found sda9 Jan 13 20:16:52.356824 extend-filesystems[1447]: Checking size of /dev/sda9 Jan 13 20:16:52.369137 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:16:52.362575 dbus-daemon[1445]: [system] SELinux support is enabled Jan 13 20:16:52.390241 coreos-metadata[1444]: Jan 13 20:16:52.358 INFO Fetch successful Jan 13 20:16:52.390241 coreos-metadata[1444]: Jan 13 20:16:52.358 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:16:52.390241 coreos-metadata[1444]: Jan 13 20:16:52.362 INFO Fetch successful Jan 13 20:16:52.389077 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:16:52.389272 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:16:52.395132 extend-filesystems[1447]: Resized partition /dev/sda9 Jan 13 20:16:52.404010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:16:52.407294 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:16:52.404058 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:16:52.406725 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:16:52.406753 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:16:52.417005 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:16:52.420160 jq[1458]: true Jan 13 20:16:52.422524 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:16:52.431052 tar[1460]: linux-arm64/helm Jan 13 20:16:52.483928 update_engine[1457]: I20250113 20:16:52.483731 1457 main.cc:92] Flatcar Update Engine starting Jan 13 20:16:52.486973 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:16:52.487147 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:16:52.502128 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:16:52.503268 update_engine[1457]: I20250113 20:16:52.503193 1457 update_check_scheduler.cc:74] Next update check in 11m27s Jan 13 20:16:52.516376 jq[1485]: true Jan 13 20:16:52.554411 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1344) Jan 13 20:16:52.519943 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 13 20:16:52.558167 systemd-logind[1456]: New seat seat0. Jan 13 20:16:52.562281 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:16:52.562305 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:16:52.562554 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:16:52.618366 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:16:52.624580 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:16:52.642345 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:16:52.667150 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:16:52.667150 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:16:52.667150 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:16:52.671087 extend-filesystems[1447]: Resized filesystem in /dev/sda9 Jan 13 20:16:52.671087 extend-filesystems[1447]: Found sr0 Jan 13 20:16:52.669146 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:16:52.669878 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:16:52.696528 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:52.698819 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:16:52.713938 systemd[1]: Starting sshkeys.service... Jan 13 20:16:52.725649 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:16:52.741798 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:16:52.750989 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:16:52.800353 coreos-metadata[1529]: Jan 13 20:16:52.800 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:16:52.802732 coreos-metadata[1529]: Jan 13 20:16:52.802 INFO Fetch successful Jan 13 20:16:52.810314 unknown[1529]: wrote ssh authorized keys file for user: core Jan 13 20:16:52.852080 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:52.853610 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:16:52.859588 systemd[1]: Finished sshkeys.service. Jan 13 20:16:52.866316 containerd[1469]: time="2025-01-13T20:16:52.866207000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:16:52.937806 containerd[1469]: time="2025-01-13T20:16:52.937688720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.944864 containerd[1469]: time="2025-01-13T20:16:52.944732080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:52.944864 containerd[1469]: time="2025-01-13T20:16:52.944776880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 13 20:16:52.944864 containerd[1469]: time="2025-01-13T20:16:52.944797040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:16:52.945023 containerd[1469]: time="2025-01-13T20:16:52.944970080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:16:52.945023 containerd[1469]: time="2025-01-13T20:16:52.944987960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945072 containerd[1469]: time="2025-01-13T20:16:52.945050480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945072 containerd[1469]: time="2025-01-13T20:16:52.945064640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945523 containerd[1469]: time="2025-01-13T20:16:52.945230240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945523 containerd[1469]: time="2025-01-13T20:16:52.945252040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945523 containerd[1469]: time="2025-01-13T20:16:52.945289480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945523 containerd[1469]: time="2025-01-13T20:16:52.945300120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.945523 containerd[1469]: time="2025-01-13T20:16:52.945371080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.947871 containerd[1469]: time="2025-01-13T20:16:52.947680560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:52.947871 containerd[1469]: time="2025-01-13T20:16:52.947842640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:52.947871 containerd[1469]: time="2025-01-13T20:16:52.947857840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:16:52.948013 containerd[1469]: time="2025-01-13T20:16:52.947949240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:16:52.948013 containerd[1469]: time="2025-01-13T20:16:52.947991080Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:16:52.958836 containerd[1469]: time="2025-01-13T20:16:52.958489520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:16:52.958836 containerd[1469]: time="2025-01-13T20:16:52.958568280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 20:16:52.958836 containerd[1469]: time="2025-01-13T20:16:52.958617800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:16:52.958836 containerd[1469]: time="2025-01-13T20:16:52.958639000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:16:52.958836 containerd[1469]: time="2025-01-13T20:16:52.958657840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:16:52.959028 containerd[1469]: time="2025-01-13T20:16:52.958851600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959170040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959284200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959299760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959314280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959328200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959341240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959357720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959370920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959385640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959398360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959414400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959426040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959445920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959509 containerd[1469]: time="2025-01-13T20:16:52.959460400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959472520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959508560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959522040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959536480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959548640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959561680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959576960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959590960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959605360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959618080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959630040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959644320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959702080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959719480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.959826 containerd[1469]: time="2025-01-13T20:16:52.959733000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959909800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959928800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959940360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959951880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959967880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959982360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.959992600Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:16:52.960086 containerd[1469]: time="2025-01-13T20:16:52.960003120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:16:52.961516 containerd[1469]: time="2025-01-13T20:16:52.960417920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:16:52.965526 containerd[1469]: time="2025-01-13T20:16:52.960474200Z" level=info msg="Connect containerd service" Jan 13 20:16:52.965617 containerd[1469]: time="2025-01-13T20:16:52.965564840Z" level=info msg="using legacy CRI server" Jan 13 20:16:52.965617 containerd[1469]: time="2025-01-13T20:16:52.965578600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:16:52.966255 containerd[1469]: 
time="2025-01-13T20:16:52.965878160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:16:52.968869 containerd[1469]: time="2025-01-13T20:16:52.968823880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969174960Z" level=info msg="Start subscribing containerd event" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969243480Z" level=info msg="Start recovering state" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969317960Z" level=info msg="Start event monitor" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969340840Z" level=info msg="Start snapshots syncer" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969351840Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969360200Z" level=info msg="Start streaming server" Jan 13 20:16:52.969938 containerd[1469]: time="2025-01-13T20:16:52.969437600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:16:52.971519 containerd[1469]: time="2025-01-13T20:16:52.971489200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:16:52.971722 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:16:52.974598 containerd[1469]: time="2025-01-13T20:16:52.972393120Z" level=info msg="containerd successfully booted in 0.110190s" Jan 13 20:16:53.113177 tar[1460]: linux-arm64/LICENSE Jan 13 20:16:53.113382 tar[1460]: linux-arm64/README.md Jan 13 20:16:53.128551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:16:53.408687 systemd-networkd[1349]: eth0: Gained IPv6LL Jan 13 20:16:53.409533 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection. Jan 13 20:16:53.417802 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:53.419954 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:16:53.429986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:53.433860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:16:53.472656 systemd-networkd[1349]: eth1: Gained IPv6LL Jan 13 20:16:53.474075 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection. Jan 13 20:16:53.479984 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:16:54.086452 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:16:54.118355 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:16:54.123892 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:16:54.135439 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:16:54.137564 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:16:54.149019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:16:54.160596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:16:54.169825 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 20:16:54.172943 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:16:54.174196 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:16:54.197796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:16:54.199248 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:16:54.203674 systemd[1]: Startup finished in 751ms (kernel) + 7.892s (initrd) + 4.663s (userspace) = 13.307s. Jan 13 20:16:54.204164 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:54.779827 kubelet[1576]: E0113 20:16:54.779739 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:54.783731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:54.783978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:04.976304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:17:04.983787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:05.106799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:05.109946 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:05.162358 kubelet[1595]: E0113 20:17:05.162198 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:05.166264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:05.166551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:15.226199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:17:15.234955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:15.343344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:15.353031 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:15.407126 kubelet[1610]: E0113 20:17:15.407041 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:15.410102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:15.410288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:23.627029 systemd-timesyncd[1337]: Contacted time server 144.76.43.40:123 (2.flatcar.pool.ntp.org). Jan 13 20:17:23.627103 systemd-timesyncd[1337]: Initial clock synchronization to Mon 2025-01-13 20:17:23.588089 UTC. 
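[editor note] The kubelet starts logged above and below keep failing because /var/lib/kubelet/config.yaml does not exist yet; with a kubeadm-style setup (note the unset KUBELET_KUBEADM_ARGS environment variable) that file is only written when the node is initialized or joined, so the restart loop is expected until then. For reference, a minimal KubeletConfiguration of the kind kubeadm generates might look like the sketch below; the field values are illustrative assumptions, not recovered from this node:

    # /var/lib/kubelet/config.yaml  -- illustrative minimal sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the SystemdCgroup=true runc option reported by containerd above
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests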
Jan 13 20:17:25.475915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:17:25.486382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:25.619223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:25.624278 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:25.665786 kubelet[1624]: E0113 20:17:25.665736 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:25.669372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:25.669690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:35.727272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:17:35.734867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:35.876815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:35.885327 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:35.935468 kubelet[1640]: E0113 20:17:35.935419 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:35.937948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:35.938104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:37.942606 update_engine[1457]: I20250113 20:17:37.942427 1457 update_attempter.cc:509] Updating boot flags... Jan 13 20:17:37.990671 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1656) Jan 13 20:17:38.054808 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1652) Jan 13 20:17:38.101582 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1652) Jan 13 20:17:45.976605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:17:45.984828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:46.107618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:17:46.113409 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:46.159028 kubelet[1676]: E0113 20:17:46.158949 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:46.163217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:46.163570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:56.225897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:17:56.235858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:56.346747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:56.361096 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:56.405310 kubelet[1691]: E0113 20:17:56.405168 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:56.407955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:56.408127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:06.477532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:18:06.483957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:06.606763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:06.614620 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:06.661442 kubelet[1706]: E0113 20:18:06.661369 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:06.665155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:06.665364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:16.726073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:18:16.734847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:16.853997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:16.869091 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:16.921093 kubelet[1721]: E0113 20:18:16.921015 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:16.924586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:16.924908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:26.976248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:18:26.982942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:27.104010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:27.122999 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:27.167321 kubelet[1735]: E0113 20:18:27.167241 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:27.170189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:27.170408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:37.226072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 13 20:18:37.238401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:37.379855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:37.394076 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:37.437623 kubelet[1751]: E0113 20:18:37.437575 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:37.440461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:37.440846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:42.722105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:18:42.727937 systemd[1]: Started sshd@0-138.199.153.195:22-147.75.109.163:56526.service - OpenSSH per-connection server daemon (147.75.109.163:56526). Jan 13 20:18:43.713827 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 56526 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:43.716188 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:43.726627 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:18:43.731913 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 13 20:18:43.735972 systemd-logind[1456]: New session 1 of user core. Jan 13 20:18:43.750765 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:18:43.757981 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:18:43.772365 (systemd)[1763]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:18:43.883920 systemd[1763]: Queued start job for default target default.target. Jan 13 20:18:43.896996 systemd[1763]: Created slice app.slice - User Application Slice. Jan 13 20:18:43.897055 systemd[1763]: Reached target paths.target - Paths. Jan 13 20:18:43.897078 systemd[1763]: Reached target timers.target - Timers. Jan 13 20:18:43.899344 systemd[1763]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:18:43.920020 systemd[1763]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:18:43.920085 systemd[1763]: Reached target sockets.target - Sockets. Jan 13 20:18:43.920098 systemd[1763]: Reached target basic.target - Basic System. Jan 13 20:18:43.920147 systemd[1763]: Reached target default.target - Main User Target. Jan 13 20:18:43.920175 systemd[1763]: Startup finished in 139ms. Jan 13 20:18:43.920522 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:18:43.938998 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:18:44.635953 systemd[1]: Started sshd@1-138.199.153.195:22-147.75.109.163:56534.service - OpenSSH per-connection server daemon (147.75.109.163:56534). Jan 13 20:18:45.617348 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 56534 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:45.619265 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:45.625959 systemd-logind[1456]: New session 2 of user core. Jan 13 20:18:45.635918 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:18:46.300118 sshd[1776]: Connection closed by 147.75.109.163 port 56534 Jan 13 20:18:46.301331 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:46.305329 systemd[1]: sshd@1-138.199.153.195:22-147.75.109.163:56534.service: Deactivated successfully. Jan 13 20:18:46.307600 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:18:46.310907 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:18:46.313286 systemd-logind[1456]: Removed session 2. Jan 13 20:18:46.481539 systemd[1]: Started sshd@2-138.199.153.195:22-147.75.109.163:56550.service - OpenSSH per-connection server daemon (147.75.109.163:56550). Jan 13 20:18:47.465447 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 56550 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:47.467071 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:47.468272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:18:47.478144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:47.487286 systemd-logind[1456]: New session 3 of user core. Jan 13 20:18:47.490008 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:18:47.610732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:47.623097 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:47.669422 kubelet[1792]: E0113 20:18:47.669351 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:47.671845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:47.672105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:48.142331 sshd[1786]: Connection closed by 147.75.109.163 port 56550 Jan 13 20:18:48.143232 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:48.147543 systemd[1]: sshd@2-138.199.153.195:22-147.75.109.163:56550.service: Deactivated successfully. Jan 13 20:18:48.150201 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:18:48.152113 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:18:48.154531 systemd-logind[1456]: Removed session 3. Jan 13 20:18:48.314591 systemd[1]: Started sshd@3-138.199.153.195:22-147.75.109.163:42652.service - OpenSSH per-connection server daemon (147.75.109.163:42652). Jan 13 20:18:49.312256 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 42652 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:49.314289 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:49.320282 systemd-logind[1456]: New session 4 of user core. Jan 13 20:18:49.326778 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:18:49.995734 sshd[1805]: Connection closed by 147.75.109.163 port 42652 Jan 13 20:18:49.995630 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:50.000015 systemd[1]: sshd@3-138.199.153.195:22-147.75.109.163:42652.service: Deactivated successfully. Jan 13 20:18:50.001809 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:18:50.004135 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:18:50.005403 systemd-logind[1456]: Removed session 4. Jan 13 20:18:50.169356 systemd[1]: Started sshd@4-138.199.153.195:22-147.75.109.163:42664.service - OpenSSH per-connection server daemon (147.75.109.163:42664). Jan 13 20:18:51.160927 sshd[1810]: Accepted publickey for core from 147.75.109.163 port 42664 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:51.163306 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:51.169894 systemd-logind[1456]: New session 5 of user core. Jan 13 20:18:51.179925 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:18:51.697517 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:18:51.697832 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:52.029192 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 20:18:52.029195 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:18:52.265807 dockerd[1831]: time="2025-01-13T20:18:52.265217941Z" level=info msg="Starting up" Jan 13 20:18:52.343352 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport607233768-merged.mount: Deactivated successfully. Jan 13 20:18:52.360123 systemd[1]: var-lib-docker-metacopy\x2dcheck1508971745-merged.mount: Deactivated successfully. Jan 13 20:18:52.373018 dockerd[1831]: time="2025-01-13T20:18:52.372688672Z" level=info msg="Loading containers: start." Jan 13 20:18:52.572515 kernel: Initializing XFRM netlink socket Jan 13 20:18:52.669554 systemd-networkd[1349]: docker0: Link UP Jan 13 20:18:52.709295 dockerd[1831]: time="2025-01-13T20:18:52.709235269Z" level=info msg="Loading containers: done." Jan 13 20:18:52.727536 dockerd[1831]: time="2025-01-13T20:18:52.726901323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:18:52.727536 dockerd[1831]: time="2025-01-13T20:18:52.727060520Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:18:52.727536 dockerd[1831]: time="2025-01-13T20:18:52.727219196Z" level=info msg="Daemon has completed initialization" Jan 13 20:18:52.773798 dockerd[1831]: time="2025-01-13T20:18:52.773729164Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:18:52.774159 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:18:53.872523 containerd[1469]: time="2025-01-13T20:18:53.872306792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:18:54.453039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304460263.mount: Deactivated successfully. 
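
Annotation: dockerd finishes initialization above and reports its API listening on /run/docker.sock. A liveness check against that socket can be done over HTTP with a unix-socket dialer; this sketch assumes the standard Docker Engine /_ping endpoint and read access to the socket, and the URL host is a placeholder that the custom dialer ignores.

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"net"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Ignore the host in the URL and always dial the daemon socket.
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				var d net.Dialer
    				return d.DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}
    	resp, err := client.Get("http://localhost/_ping") // host is a placeholder
    	if err != nil {
    		fmt.Println("daemon not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%s body=%q\n", resp.Status, body) // expect 200 and "OK"
    }
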
Jan 13 20:18:56.166526 containerd[1469]: time="2025-01-13T20:18:56.165203403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:56.166526 containerd[1469]: time="2025-01-13T20:18:56.166580819Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615677" Jan 13 20:18:56.168247 containerd[1469]: time="2025-01-13T20:18:56.167608800Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:56.174231 containerd[1469]: time="2025-01-13T20:18:56.174158484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:56.175610 containerd[1469]: time="2025-01-13T20:18:56.175422782Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.303073311s" Jan 13 20:18:56.175610 containerd[1469]: time="2025-01-13T20:18:56.175470181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 20:18:56.177421 containerd[1469]: time="2025-01-13T20:18:56.177208390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:18:57.726076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:18:57.731829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:57.852807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:57.855645 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:57.895402 kubelet[2079]: E0113 20:18:57.895293 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:57.897689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:57.897867 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
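
Annotation: the kube-apiserver pull above reports 25615677 bytes read in 2.303073311s, i.e. roughly 10.6 MiB/s of transferred layer data (the separate "size" fields describe image content rather than the transfer). A trivial sketch of that arithmetic using the numbers from the log.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures taken directly from the containerd entry for kube-apiserver:v1.31.4.
    	const bytesRead = 25615677
    	elapsed, _ := time.ParseDuration("2.303073311s")

    	rate := float64(bytesRead) / elapsed.Seconds() // bytes per second
    	fmt.Printf("%.1f MiB read in %s -> %.1f MiB/s\n",
    		float64(bytesRead)/(1<<20), elapsed, rate/(1<<20))
    }
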
Jan 13 20:18:58.375125 containerd[1469]: time="2025-01-13T20:18:58.373570685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.376660 containerd[1469]: time="2025-01-13T20:18:58.376577754Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470116" Jan 13 20:18:58.378511 containerd[1469]: time="2025-01-13T20:18:58.378258326Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.383705 containerd[1469]: time="2025-01-13T20:18:58.383653875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.385144 containerd[1469]: time="2025-01-13T20:18:58.384987332Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.207735183s" Jan 13 20:18:58.385144 containerd[1469]: time="2025-01-13T20:18:58.385036531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 20:18:58.386321 containerd[1469]: time="2025-01-13T20:18:58.386016235Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:19:00.082982 containerd[1469]: time="2025-01-13T20:19:00.081739057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:00.083415 containerd[1469]: time="2025-01-13T20:19:00.083369071Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024222" Jan 13 20:19:00.084944 containerd[1469]: time="2025-01-13T20:19:00.084910166Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:00.089263 containerd[1469]: time="2025-01-13T20:19:00.089206377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:00.090900 containerd[1469]: time="2025-01-13T20:19:00.090854430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.704792836s" Jan 13 20:19:00.091128 containerd[1469]: time="2025-01-13T20:19:00.091110466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 20:19:00.092158 
containerd[1469]: time="2025-01-13T20:19:00.092125450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:19:01.347668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666795165.mount: Deactivated successfully. Jan 13 20:19:01.734527 containerd[1469]: time="2025-01-13T20:19:01.734179248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.738230 containerd[1469]: time="2025-01-13T20:19:01.738157625Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452" Jan 13 20:19:01.740643 containerd[1469]: time="2025-01-13T20:19:01.740564467Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.743683 containerd[1469]: time="2025-01-13T20:19:01.743615179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.745205 containerd[1469]: time="2025-01-13T20:19:01.744992197Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.652820748s" Jan 13 20:19:01.745205 containerd[1469]: time="2025-01-13T20:19:01.745060436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 20:19:01.746542 containerd[1469]: time="2025-01-13T20:19:01.746468534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:19:02.395522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567981159.mount: Deactivated successfully. 
Jan 13 20:19:03.446973 containerd[1469]: time="2025-01-13T20:19:03.446885785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.449732 containerd[1469]: time="2025-01-13T20:19:03.449638584Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 13 20:19:03.452532 containerd[1469]: time="2025-01-13T20:19:03.451166041Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.459032 containerd[1469]: time="2025-01-13T20:19:03.458969443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.461009 containerd[1469]: time="2025-01-13T20:19:03.460959373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.7144024s" Jan 13 20:19:03.461168 containerd[1469]: time="2025-01-13T20:19:03.461151690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:19:03.461943 containerd[1469]: time="2025-01-13T20:19:03.461893639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:19:04.047626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443097345.mount: Deactivated successfully. 
Jan 13 20:19:04.058431 containerd[1469]: time="2025-01-13T20:19:04.058380553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.059655 containerd[1469]: time="2025-01-13T20:19:04.059616695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 13 20:19:04.060851 containerd[1469]: time="2025-01-13T20:19:04.060820557Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.064630 containerd[1469]: time="2025-01-13T20:19:04.064585222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.066670 containerd[1469]: time="2025-01-13T20:19:04.065648806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 603.451731ms" Jan 13 20:19:04.066670 containerd[1469]: time="2025-01-13T20:19:04.065694046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 20:19:04.067080 containerd[1469]: time="2025-01-13T20:19:04.067038586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:19:04.664798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730587040.mount: Deactivated successfully. Jan 13 20:19:07.057153 containerd[1469]: time="2025-01-13T20:19:07.057094918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.061373 containerd[1469]: time="2025-01-13T20:19:07.061008824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Jan 13 20:19:07.067502 containerd[1469]: time="2025-01-13T20:19:07.066351831Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.084987 containerd[1469]: time="2025-01-13T20:19:07.084865656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.087431 containerd[1469]: time="2025-01-13T20:19:07.087252383Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.020156958s" Jan 13 20:19:07.087431 containerd[1469]: time="2025-01-13T20:19:07.087304782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 20:19:07.976161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
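
Annotation: the restart counter climbs from 11 to 13 across this stretch (20:18:47, 20:18:57, 20:19:07), so systemd relaunches the failing kubelet roughly every 10 seconds; the exact RestartSec of the unit is not shown in the log, so the ~10 s delay is inferred from the timestamps only. A sketch of that interval arithmetic from the logged "Scheduled restart job" entries.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps of the "Scheduled restart job" entries (restart counters 11, 12, 13).
    	layout := "Jan 2 15:04:05.000000 2006"
    	stamps := []string{
    		"Jan 13 20:18:47.468272 2025",
    		"Jan 13 20:18:57.726076 2025",
    		"Jan 13 20:19:07.976161 2025",
    	}
    	var prev time.Time
    	for i, s := range stamps {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		if i > 0 {
    			fmt.Printf("restart %d -> %d: %s apart\n", 10+i, 11+i, t.Sub(prev).Round(time.Millisecond))
    		}
    		prev = t
    	}
    }
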
Jan 13 20:19:07.985598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:08.109863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:08.111351 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:08.156800 kubelet[2231]: E0113 20:19:08.156754 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:08.160340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:08.160466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:11.878286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:11.889927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:11.935768 systemd[1]: Reloading requested from client PID 2246 ('systemctl') (unit session-5.scope)... Jan 13 20:19:11.935792 systemd[1]: Reloading... Jan 13 20:19:12.066685 zram_generator::config[2292]: No configuration found. Jan 13 20:19:12.162513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:12.232154 systemd[1]: Reloading finished in 295 ms. Jan 13 20:19:12.290230 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:19:12.290340 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:19:12.290719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:12.297864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:12.410749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:12.418814 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:12.459508 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:12.459508 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:12.459508 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
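
Annotation: the freshly started kubelet (PID 2335) warns above that --container-runtime-endpoint and --volume-plugin-dir are deprecated in favour of fields in the config file it now has. A sketch of what migrating those two flags might look like, rendered as an illustrative KubeletConfiguration snippet: the field names follow kubelet.config.k8s.io/v1beta1, the volume plugin directory matches the path kubelet reports recreating further down, and the containerd socket path is an assumption the log does not confirm.

    package main

    import "fmt"

    // Illustrative only: the endpoint below is the usual containerd socket, which the
    // log does not state explicitly; the volume plugin dir matches the Flexvolume
    // path kubelet later reports recreating.
    const kubeletConfigSnippet = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    `

    func main() {
    	// Printing rather than writing to /var/lib/kubelet/config.yaml keeps the sketch harmless.
    	fmt.Print(kubeletConfigSnippet)
    }
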
Jan 13 20:19:12.459508 kubelet[2335]: I0113 20:19:12.459436 2335 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:13.248968 kubelet[2335]: I0113 20:19:13.248915 2335 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:19:13.249543 kubelet[2335]: I0113 20:19:13.249133 2335 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:13.249543 kubelet[2335]: I0113 20:19:13.249401 2335 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:19:13.281683 kubelet[2335]: E0113 20:19:13.281628 2335 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.153.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:13.283129 kubelet[2335]: I0113 20:19:13.283096 2335 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:13.298790 kubelet[2335]: E0113 20:19:13.298392 2335 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:19:13.298790 kubelet[2335]: I0113 20:19:13.298433 2335 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:19:13.302727 kubelet[2335]: I0113 20:19:13.302695 2335 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:19:13.304203 kubelet[2335]: I0113 20:19:13.304166 2335 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:19:13.304605 kubelet[2335]: I0113 20:19:13.304565 2335 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:13.304869 kubelet[2335]: I0113 20:19:13.304678 2335 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-6-9e5a1dc0a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:19:13.305179 kubelet[2335]: I0113 20:19:13.305166 2335 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:13.305231 kubelet[2335]: I0113 20:19:13.305223 2335 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:19:13.305532 kubelet[2335]: I0113 20:19:13.305468 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:13.308334 kubelet[2335]: I0113 20:19:13.308263 2335 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:19:13.308334 kubelet[2335]: I0113 20:19:13.308313 2335 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:13.308334 kubelet[2335]: I0113 20:19:13.308351 2335 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:19:13.308703 kubelet[2335]: I0113 20:19:13.308365 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:13.311960 kubelet[2335]: W0113 20:19:13.311673 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-9e5a1dc0a6&limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:13.311960 kubelet[2335]: E0113 20:19:13.311749 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.199.153.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-9e5a1dc0a6&limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:13.313865 kubelet[2335]: W0113 20:19:13.312277 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:13.313865 kubelet[2335]: E0113 20:19:13.312325 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:13.314006 kubelet[2335]: I0113 20:19:13.313958 2335 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:13.316340 kubelet[2335]: I0113 20:19:13.316295 2335 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:13.317206 kubelet[2335]: W0113 20:19:13.317176 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:19:13.318077 kubelet[2335]: I0113 20:19:13.318023 2335 server.go:1269] "Started kubelet" Jan 13 20:19:13.321337 kubelet[2335]: I0113 20:19:13.321302 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:13.326009 kubelet[2335]: I0113 20:19:13.325958 2335 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:13.327246 kubelet[2335]: I0113 20:19:13.327213 2335 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:19:13.328444 kubelet[2335]: I0113 20:19:13.328380 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:13.328754 kubelet[2335]: I0113 20:19:13.328723 2335 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:19:13.328915 kubelet[2335]: I0113 20:19:13.328899 2335 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:13.329152 kubelet[2335]: E0113 20:19:13.329106 2335 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-0-6-9e5a1dc0a6\" not found" Jan 13 20:19:13.332754 kubelet[2335]: I0113 20:19:13.332723 2335 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:19:13.332961 kubelet[2335]: I0113 20:19:13.332950 2335 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:13.333843 kubelet[2335]: I0113 20:19:13.333800 2335 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:19:13.335273 kubelet[2335]: E0113 20:19:13.334353 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-9e5a1dc0a6?timeout=10s\": dial tcp 138.199.153.195:6443: connect: connection refused" interval="200ms" Jan 13 20:19:13.335273 kubelet[2335]: I0113 20:19:13.334825 2335 factory.go:221] Registration 
of the systemd container factory successfully Jan 13 20:19:13.335273 kubelet[2335]: I0113 20:19:13.334946 2335 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:13.336857 kubelet[2335]: E0113 20:19:13.335196 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.195:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.195:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-6-9e5a1dc0a6.181a59fde5937b40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-9e5a1dc0a6,UID:ci-4152-2-0-6-9e5a1dc0a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-9e5a1dc0a6,},FirstTimestamp:2025-01-13 20:19:13.31799328 +0000 UTC m=+0.895218070,LastTimestamp:2025-01-13 20:19:13.31799328 +0000 UTC m=+0.895218070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-9e5a1dc0a6,}" Jan 13 20:19:13.339032 kubelet[2335]: W0113 20:19:13.338949 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:13.339032 kubelet[2335]: E0113 20:19:13.339025 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.153.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:13.339788 kubelet[2335]: E0113 20:19:13.339715 2335 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:13.340109 kubelet[2335]: I0113 20:19:13.339966 2335 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:13.346389 kubelet[2335]: I0113 20:19:13.346209 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:13.358436 kubelet[2335]: I0113 20:19:13.358383 2335 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:19:13.359612 kubelet[2335]: I0113 20:19:13.358695 2335 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:13.359612 kubelet[2335]: I0113 20:19:13.358736 2335 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:19:13.359612 kubelet[2335]: E0113 20:19:13.358823 2335 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:13.371509 kubelet[2335]: W0113 20:19:13.371328 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:13.371509 kubelet[2335]: E0113 20:19:13.371401 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:13.375222 kubelet[2335]: I0113 20:19:13.375182 2335 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:13.375222 kubelet[2335]: I0113 20:19:13.375201 2335 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:13.375222 kubelet[2335]: I0113 20:19:13.375226 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:13.377215 kubelet[2335]: I0113 20:19:13.377183 2335 policy_none.go:49] "None policy: Start" Jan 13 20:19:13.377826 kubelet[2335]: I0113 20:19:13.377809 2335 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:13.377826 kubelet[2335]: I0113 20:19:13.377861 2335 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:13.387603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:19:13.397210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:19:13.401901 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:19:13.412252 kubelet[2335]: I0113 20:19:13.411201 2335 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:13.412252 kubelet[2335]: I0113 20:19:13.411546 2335 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:19:13.412252 kubelet[2335]: I0113 20:19:13.411571 2335 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:13.412252 kubelet[2335]: I0113 20:19:13.412062 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:13.416173 kubelet[2335]: E0113 20:19:13.416109 2335 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-6-9e5a1dc0a6\" not found" Jan 13 20:19:13.474571 systemd[1]: Created slice kubepods-burstable-podccac78aee092e912d55e3801622f6224.slice - libcontainer container kubepods-burstable-podccac78aee092e912d55e3801622f6224.slice. Jan 13 20:19:13.491612 systemd[1]: Created slice kubepods-burstable-pod421a0673669071417355351020b09dff.slice - libcontainer container kubepods-burstable-pod421a0673669071417355351020b09dff.slice. 
Jan 13 20:19:13.498652 systemd[1]: Created slice kubepods-burstable-poda5ac523be9bd728baf01b8d53149e7dd.slice - libcontainer container kubepods-burstable-poda5ac523be9bd728baf01b8d53149e7dd.slice. Jan 13 20:19:13.515860 kubelet[2335]: I0113 20:19:13.515647 2335 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.516983 kubelet[2335]: E0113 20:19:13.516294 2335 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.195:6443/api/v1/nodes\": dial tcp 138.199.153.195:6443: connect: connection refused" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.535471 kubelet[2335]: E0113 20:19:13.535390 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-9e5a1dc0a6?timeout=10s\": dial tcp 138.199.153.195:6443: connect: connection refused" interval="400ms" Jan 13 20:19:13.633733 kubelet[2335]: I0113 20:19:13.633684 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634085 kubelet[2335]: I0113 20:19:13.633914 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634085 kubelet[2335]: I0113 20:19:13.633970 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634085 kubelet[2335]: I0113 20:19:13.633992 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccac78aee092e912d55e3801622f6224-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"ccac78aee092e912d55e3801622f6224\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634085 kubelet[2335]: I0113 20:19:13.634032 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634085 kubelet[2335]: I0113 20:19:13.634051 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634250 kubelet[2335]: 
I0113 20:19:13.634067 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634674 kubelet[2335]: I0113 20:19:13.634587 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.634674 kubelet[2335]: I0113 20:19:13.634627 2335 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.720084 kubelet[2335]: I0113 20:19:13.719671 2335 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.720433 kubelet[2335]: E0113 20:19:13.720392 2335 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.195:6443/api/v1/nodes\": dial tcp 138.199.153.195:6443: connect: connection refused" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:13.789081 containerd[1469]: time="2025-01-13T20:19:13.788907387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6,Uid:ccac78aee092e912d55e3801622f6224,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:13.796516 containerd[1469]: time="2025-01-13T20:19:13.796111580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6,Uid:421a0673669071417355351020b09dff,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:13.802199 containerd[1469]: time="2025-01-13T20:19:13.801671073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6,Uid:a5ac523be9bd728baf01b8d53149e7dd,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:13.936664 kubelet[2335]: E0113 20:19:13.936594 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-9e5a1dc0a6?timeout=10s\": dial tcp 138.199.153.195:6443: connect: connection refused" interval="800ms" Jan 13 20:19:14.124685 kubelet[2335]: I0113 20:19:14.124194 2335 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:14.124685 kubelet[2335]: E0113 20:19:14.124621 2335 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.153.195:6443/api/v1/nodes\": dial tcp 138.199.153.195:6443: connect: connection refused" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:14.319047 kubelet[2335]: W0113 20:19:14.318889 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
138.199.153.195:6443: connect: connection refused Jan 13 20:19:14.319237 kubelet[2335]: E0113 20:19:14.319197 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.153.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:14.323685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186672799.mount: Deactivated successfully. Jan 13 20:19:14.331522 containerd[1469]: time="2025-01-13T20:19:14.331414670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:14.333657 containerd[1469]: time="2025-01-13T20:19:14.333451966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:19:14.335179 containerd[1469]: time="2025-01-13T20:19:14.334347315Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:14.338553 containerd[1469]: time="2025-01-13T20:19:14.337270480Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:14.338553 containerd[1469]: time="2025-01-13T20:19:14.338240749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:14.341326 containerd[1469]: time="2025-01-13T20:19:14.341282633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:14.342594 containerd[1469]: time="2025-01-13T20:19:14.342419059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:14.344311 containerd[1469]: time="2025-01-13T20:19:14.343756124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:14.344311 containerd[1469]: time="2025-01-13T20:19:14.343845763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.232544ms" Jan 13 20:19:14.345816 containerd[1469]: time="2025-01-13T20:19:14.345767340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.968548ms" Jan 13 20:19:14.352283 containerd[1469]: time="2025-01-13T20:19:14.352070265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.853766ms" Jan 13 20:19:14.461574 containerd[1469]: time="2025-01-13T20:19:14.460534221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:14.461574 containerd[1469]: time="2025-01-13T20:19:14.460672979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:14.461574 containerd[1469]: time="2025-01-13T20:19:14.460691219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.462693 kubelet[2335]: W0113 20:19:14.462625 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:14.462868 kubelet[2335]: E0113 20:19:14.462717 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.153.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:14.463126 containerd[1469]: time="2025-01-13T20:19:14.462228561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.464893 containerd[1469]: time="2025-01-13T20:19:14.464627652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:14.464893 containerd[1469]: time="2025-01-13T20:19:14.464695732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:14.464893 containerd[1469]: time="2025-01-13T20:19:14.464713131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.465621 containerd[1469]: time="2025-01-13T20:19:14.465137046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:14.465621 containerd[1469]: time="2025-01-13T20:19:14.465192886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:14.465621 containerd[1469]: time="2025-01-13T20:19:14.465211405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.465621 containerd[1469]: time="2025-01-13T20:19:14.465351924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.465995 containerd[1469]: time="2025-01-13T20:19:14.465614161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:14.495268 systemd[1]: Started cri-containerd-f74cd99fa350588920ccd09a5a11dec61bbb7c4c441b0adf6094455c15bfa0da.scope - libcontainer container f74cd99fa350588920ccd09a5a11dec61bbb7c4c441b0adf6094455c15bfa0da. Jan 13 20:19:14.503426 systemd[1]: Started cri-containerd-cf178e288f791c8ceeba9a5556859b68fcfd91a7a7a7b95c5365ee3c8df4314b.scope - libcontainer container cf178e288f791c8ceeba9a5556859b68fcfd91a7a7a7b95c5365ee3c8df4314b. Jan 13 20:19:14.511138 systemd[1]: Started cri-containerd-b4c1057a81d56c18370d9b390e2b85991216cb136a189af31c7d10c28bc3a62d.scope - libcontainer container b4c1057a81d56c18370d9b390e2b85991216cb136a189af31c7d10c28bc3a62d. Jan 13 20:19:14.579469 containerd[1469]: time="2025-01-13T20:19:14.576451808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6,Uid:a5ac523be9bd728baf01b8d53149e7dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f74cd99fa350588920ccd09a5a11dec61bbb7c4c441b0adf6094455c15bfa0da\"" Jan 13 20:19:14.579469 containerd[1469]: time="2025-01-13T20:19:14.576776444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6,Uid:ccac78aee092e912d55e3801622f6224,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4c1057a81d56c18370d9b390e2b85991216cb136a189af31c7d10c28bc3a62d\"" Jan 13 20:19:14.581269 containerd[1469]: time="2025-01-13T20:19:14.581198432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6,Uid:421a0673669071417355351020b09dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf178e288f791c8ceeba9a5556859b68fcfd91a7a7a7b95c5365ee3c8df4314b\"" Jan 13 20:19:14.586155 containerd[1469]: time="2025-01-13T20:19:14.585820737Z" level=info msg="CreateContainer within sandbox \"f74cd99fa350588920ccd09a5a11dec61bbb7c4c441b0adf6094455c15bfa0da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:19:14.586155 containerd[1469]: time="2025-01-13T20:19:14.585840817Z" level=info msg="CreateContainer within sandbox \"b4c1057a81d56c18370d9b390e2b85991216cb136a189af31c7d10c28bc3a62d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:19:14.587211 containerd[1469]: time="2025-01-13T20:19:14.587170961Z" level=info msg="CreateContainer within sandbox \"cf178e288f791c8ceeba9a5556859b68fcfd91a7a7a7b95c5365ee3c8df4314b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:19:14.605650 containerd[1469]: time="2025-01-13T20:19:14.605104469Z" level=info msg="CreateContainer within sandbox \"f74cd99fa350588920ccd09a5a11dec61bbb7c4c441b0adf6094455c15bfa0da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7675dce7536f1155e0c5bba63db3fdb1d37f3e21f8ccc38d65be5e3455a25d6\"" Jan 13 20:19:14.606457 containerd[1469]: time="2025-01-13T20:19:14.606335334Z" level=info msg="StartContainer for \"d7675dce7536f1155e0c5bba63db3fdb1d37f3e21f8ccc38d65be5e3455a25d6\"" Jan 13 20:19:14.615382 containerd[1469]: time="2025-01-13T20:19:14.614698195Z" level=info msg="CreateContainer within sandbox \"cf178e288f791c8ceeba9a5556859b68fcfd91a7a7a7b95c5365ee3c8df4314b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"766e5468c803566e4968c65e3852cd8d23fcefbfe32c7808cf046e19423222b5\"" Jan 13 20:19:14.616292 containerd[1469]: time="2025-01-13T20:19:14.616262337Z" level=info msg="StartContainer for 
\"766e5468c803566e4968c65e3852cd8d23fcefbfe32c7808cf046e19423222b5\"" Jan 13 20:19:14.624785 containerd[1469]: time="2025-01-13T20:19:14.624728077Z" level=info msg="CreateContainer within sandbox \"b4c1057a81d56c18370d9b390e2b85991216cb136a189af31c7d10c28bc3a62d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0ce663e59625a6ba62b6ae38b260f612d2fdf94533eb09b2225a267224ca8b8\"" Jan 13 20:19:14.626300 containerd[1469]: time="2025-01-13T20:19:14.626243459Z" level=info msg="StartContainer for \"f0ce663e59625a6ba62b6ae38b260f612d2fdf94533eb09b2225a267224ca8b8\"" Jan 13 20:19:14.639723 systemd[1]: Started cri-containerd-d7675dce7536f1155e0c5bba63db3fdb1d37f3e21f8ccc38d65be5e3455a25d6.scope - libcontainer container d7675dce7536f1155e0c5bba63db3fdb1d37f3e21f8ccc38d65be5e3455a25d6. Jan 13 20:19:14.676086 systemd[1]: Started cri-containerd-766e5468c803566e4968c65e3852cd8d23fcefbfe32c7808cf046e19423222b5.scope - libcontainer container 766e5468c803566e4968c65e3852cd8d23fcefbfe32c7808cf046e19423222b5. Jan 13 20:19:14.689730 systemd[1]: Started cri-containerd-f0ce663e59625a6ba62b6ae38b260f612d2fdf94533eb09b2225a267224ca8b8.scope - libcontainer container f0ce663e59625a6ba62b6ae38b260f612d2fdf94533eb09b2225a267224ca8b8. Jan 13 20:19:14.719978 containerd[1469]: time="2025-01-13T20:19:14.717185022Z" level=info msg="StartContainer for \"d7675dce7536f1155e0c5bba63db3fdb1d37f3e21f8ccc38d65be5e3455a25d6\" returns successfully" Jan 13 20:19:14.737615 kubelet[2335]: E0113 20:19:14.737227 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-9e5a1dc0a6?timeout=10s\": dial tcp 138.199.153.195:6443: connect: connection refused" interval="1.6s" Jan 13 20:19:14.760946 containerd[1469]: time="2025-01-13T20:19:14.760810145Z" level=info msg="StartContainer for \"f0ce663e59625a6ba62b6ae38b260f612d2fdf94533eb09b2225a267224ca8b8\" returns successfully" Jan 13 20:19:14.769548 containerd[1469]: time="2025-01-13T20:19:14.768549294Z" level=info msg="StartContainer for \"766e5468c803566e4968c65e3852cd8d23fcefbfe32c7808cf046e19423222b5\" returns successfully" Jan 13 20:19:14.871762 kubelet[2335]: W0113 20:19:14.871674 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-9e5a1dc0a6&limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:14.871907 kubelet[2335]: E0113 20:19:14.871770 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.153.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-9e5a1dc0a6&limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:14.884519 kubelet[2335]: W0113 20:19:14.884423 2335 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.195:6443: connect: connection refused Jan 13 20:19:14.884640 kubelet[2335]: E0113 20:19:14.884569 2335 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://138.199.153.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.153.195:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:19:14.927727 kubelet[2335]: I0113 20:19:14.927689 2335 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:17.477799 kubelet[2335]: E0113 20:19:17.477738 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-6-9e5a1dc0a6\" not found" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:17.561265 kubelet[2335]: I0113 20:19:17.561181 2335 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:17.561265 kubelet[2335]: E0113 20:19:17.561230 2335 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152-2-0-6-9e5a1dc0a6\": node \"ci-4152-2-0-6-9e5a1dc0a6\" not found" Jan 13 20:19:17.615775 kubelet[2335]: E0113 20:19:17.615500 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-6-9e5a1dc0a6.181a59fde5937b40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-9e5a1dc0a6,UID:ci-4152-2-0-6-9e5a1dc0a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-9e5a1dc0a6,},FirstTimestamp:2025-01-13 20:19:13.31799328 +0000 UTC m=+0.895218070,LastTimestamp:2025-01-13 20:19:13.31799328 +0000 UTC m=+0.895218070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-9e5a1dc0a6,}" Jan 13 20:19:17.708511 kubelet[2335]: E0113 20:19:17.706724 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-6-9e5a1dc0a6.181a59fde6deb94a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-9e5a1dc0a6,UID:ci-4152-2-0-6-9e5a1dc0a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-9e5a1dc0a6,},FirstTimestamp:2025-01-13 20:19:13.339701578 +0000 UTC m=+0.916926368,LastTimestamp:2025-01-13 20:19:13.339701578 +0000 UTC m=+0.916926368,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-9e5a1dc0a6,}" Jan 13 20:19:17.773747 kubelet[2335]: E0113 20:19:17.773196 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-6-9e5a1dc0a6.181a59fde8f38fc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-9e5a1dc0a6,UID:ci-4152-2-0-6-9e5a1dc0a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4152-2-0-6-9e5a1dc0a6 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-9e5a1dc0a6,},FirstTimestamp:2025-01-13 20:19:13.374621636 +0000 UTC m=+0.951846426,LastTimestamp:2025-01-13 20:19:13.374621636 +0000 UTC m=+0.951846426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-9e5a1dc0a6,}" Jan 13 20:19:18.315606 kubelet[2335]: I0113 20:19:18.315251 2335 apiserver.go:52] "Watching apiserver" Jan 13 20:19:18.333406 kubelet[2335]: I0113 20:19:18.333371 2335 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:19:19.874671 systemd[1]: Reloading requested from client PID 2613 ('systemctl') (unit session-5.scope)... Jan 13 20:19:19.874692 systemd[1]: Reloading... Jan 13 20:19:19.980749 zram_generator::config[2653]: No configuration found. Jan 13 20:19:20.094650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:20.180005 systemd[1]: Reloading finished in 304 ms. Jan 13 20:19:20.226672 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:20.239800 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:19:20.241559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:20.241649 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 116.9M memory peak, 0B memory swap peak. Jan 13 20:19:20.248931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:20.410788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:20.413067 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:20.470001 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:20.470001 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:20.470001 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:20.470001 kubelet[2698]: I0113 20:19:20.469934 2698 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:20.483756 kubelet[2698]: I0113 20:19:20.483541 2698 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:19:20.483909 kubelet[2698]: I0113 20:19:20.483874 2698 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:20.486142 kubelet[2698]: I0113 20:19:20.484732 2698 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:19:20.486852 kubelet[2698]: I0113 20:19:20.486758 2698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
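The kubelet notes above that client certificate rotation is on and that it loaded its pair from /var/lib/kubelet/pki/kubelet-client-current.pem. As a quick way to see how much lifetime that rotated credential has left, here is a small self-contained Go sketch; it assumes the file is readable on the node and, as kubelet rotation normally writes it, holds the certificate and key concatenated. It only reads the PEM and changes nothing.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the certificate_store line above; adjust if the kubelet
	// keeps its rotated client credential elsewhere.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The rotated file normally holds the client certificate and key
	// concatenated; walk the PEM blocks and report each certificate's window.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```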
Jan 13 20:19:20.489363 kubelet[2698]: I0113 20:19:20.489308 2698 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:20.496731 kubelet[2698]: E0113 20:19:20.496320 2698 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:19:20.496731 kubelet[2698]: I0113 20:19:20.496371 2698 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:19:20.500159 kubelet[2698]: I0113 20:19:20.500082 2698 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:19:20.500329 kubelet[2698]: I0113 20:19:20.500307 2698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:19:20.501228 kubelet[2698]: I0113 20:19:20.501117 2698 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:20.501453 kubelet[2698]: I0113 20:19:20.501205 2698 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-6-9e5a1dc0a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:19:20.501614 kubelet[2698]: I0113 20:19:20.501465 2698 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:20.501614 kubelet[2698]: I0113 20:19:20.501491 2698 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:19:20.501963 kubelet[2698]: I0113 20:19:20.501943 2698 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:20.502155 kubelet[2698]: I0113 20:19:20.502109 2698 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:19:20.502155 kubelet[2698]: I0113 20:19:20.502130 2698 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:20.506230 kubelet[2698]: I0113 
20:19:20.505006 2698 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:19:20.506230 kubelet[2698]: I0113 20:19:20.505036 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:20.514516 kubelet[2698]: I0113 20:19:20.514438 2698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:20.516486 kubelet[2698]: I0113 20:19:20.515989 2698 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:20.517845 kubelet[2698]: I0113 20:19:20.517817 2698 server.go:1269] "Started kubelet" Jan 13 20:19:20.524838 kubelet[2698]: I0113 20:19:20.524788 2698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:20.525244 kubelet[2698]: I0113 20:19:20.525094 2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:20.525837 kubelet[2698]: I0113 20:19:20.525802 2698 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:19:20.529126 kubelet[2698]: I0113 20:19:20.527862 2698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:20.529126 kubelet[2698]: I0113 20:19:20.528094 2698 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:20.541982 kubelet[2698]: I0113 20:19:20.541898 2698 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:19:20.545152 kubelet[2698]: E0113 20:19:20.545101 2698 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-0-6-9e5a1dc0a6\" not found" Jan 13 20:19:20.545152 kubelet[2698]: I0113 20:19:20.544930 2698 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:19:20.546902 kubelet[2698]: I0113 20:19:20.544914 2698 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:19:20.547742 kubelet[2698]: I0113 20:19:20.547711 2698 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:20.553462 kubelet[2698]: I0113 20:19:20.553429 2698 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:20.554076 kubelet[2698]: I0113 20:19:20.554045 2698 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:20.556821 kubelet[2698]: I0113 20:19:20.556267 2698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:20.557724 kubelet[2698]: I0113 20:19:20.557595 2698 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:19:20.557724 kubelet[2698]: I0113 20:19:20.557628 2698 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:20.557724 kubelet[2698]: I0113 20:19:20.557648 2698 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:19:20.557724 kubelet[2698]: E0113 20:19:20.557696 2698 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:20.564232 kubelet[2698]: I0113 20:19:20.564200 2698 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:20.571512 kubelet[2698]: E0113 20:19:20.564891 2698 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:20.616550 kubelet[2698]: I0113 20:19:20.616456 2698 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:20.616749 kubelet[2698]: I0113 20:19:20.616642 2698 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:20.616749 kubelet[2698]: I0113 20:19:20.616669 2698 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:20.616904 kubelet[2698]: I0113 20:19:20.616879 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:19:20.616967 kubelet[2698]: I0113 20:19:20.616897 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:19:20.616967 kubelet[2698]: I0113 20:19:20.616916 2698 policy_none.go:49] "None policy: Start" Jan 13 20:19:20.617693 kubelet[2698]: I0113 20:19:20.617625 2698 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:20.617693 kubelet[2698]: I0113 20:19:20.617653 2698 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:20.617833 kubelet[2698]: I0113 20:19:20.617815 2698 state_mem.go:75] "Updated machine memory state" Jan 13 20:19:20.622510 kubelet[2698]: I0113 20:19:20.622452 2698 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:20.622717 kubelet[2698]: I0113 20:19:20.622658 2698 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:19:20.622717 kubelet[2698]: I0113 20:19:20.622679 2698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:20.623313 kubelet[2698]: I0113 20:19:20.623273 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:20.727169 kubelet[2698]: I0113 20:19:20.727012 2698 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.743130 kubelet[2698]: I0113 20:19:20.743074 2698 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.743449 kubelet[2698]: I0113 20:19:20.743379 2698 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.750719 kubelet[2698]: I0113 20:19:20.750615 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.750894 kubelet[2698]: I0113 20:19:20.750792 2698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.750929 kubelet[2698]: I0113 20:19:20.750864 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751017 kubelet[2698]: I0113 20:19:20.750948 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751107 kubelet[2698]: I0113 20:19:20.751040 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccac78aee092e912d55e3801622f6224-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"ccac78aee092e912d55e3801622f6224\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751156 kubelet[2698]: I0113 20:19:20.751107 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751260 kubelet[2698]: I0113 20:19:20.751188 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/421a0673669071417355351020b09dff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"421a0673669071417355351020b09dff\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751493 kubelet[2698]: I0113 20:19:20.751277 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:20.751493 kubelet[2698]: I0113 20:19:20.751381 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5ac523be9bd728baf01b8d53149e7dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6\" (UID: \"a5ac523be9bd728baf01b8d53149e7dd\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:21.512188 kubelet[2698]: I0113 20:19:21.510787 2698 apiserver.go:52] "Watching apiserver" Jan 13 20:19:21.545384 kubelet[2698]: I0113 20:19:21.545293 2698 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:19:21.613609 kubelet[2698]: E0113 20:19:21.613564 2698 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" Jan 13 20:19:21.651420 kubelet[2698]: I0113 20:19:21.651286 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-6-9e5a1dc0a6" podStartSLOduration=1.651266787 podStartE2EDuration="1.651266787s" podCreationTimestamp="2025-01-13 20:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:21.633901086 +0000 UTC m=+1.211542517" watchObservedRunningTime="2025-01-13 20:19:21.651266787 +0000 UTC m=+1.228908218" Jan 13 20:19:21.666711 kubelet[2698]: I0113 20:19:21.666640 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-6-9e5a1dc0a6" podStartSLOduration=1.666621189 podStartE2EDuration="1.666621189s" podCreationTimestamp="2025-01-13 20:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:21.651552745 +0000 UTC m=+1.229194175" watchObservedRunningTime="2025-01-13 20:19:21.666621189 +0000 UTC m=+1.244262620" Jan 13 20:19:21.686041 kubelet[2698]: I0113 20:19:21.685961 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-6-9e5a1dc0a6" podStartSLOduration=1.6859425099999998 podStartE2EDuration="1.68594251s" podCreationTimestamp="2025-01-13 20:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:21.669712637 +0000 UTC m=+1.247354068" watchObservedRunningTime="2025-01-13 20:19:21.68594251 +0000 UTC m=+1.263583941" Jan 13 20:19:22.071721 sudo[1813]: pam_unix(sudo:session): session closed for user root Jan 13 20:19:22.231239 sshd[1812]: Connection closed by 147.75.109.163 port 42664 Jan 13 20:19:22.232008 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 13 20:19:22.237023 systemd[1]: sshd@4-138.199.153.195:22-147.75.109.163:42664.service: Deactivated successfully. Jan 13 20:19:22.241033 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:19:22.241252 systemd[1]: session-5.scope: Consumed 5.708s CPU time, 157.2M memory peak, 0B memory swap peak. Jan 13 20:19:22.243596 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:19:22.244826 systemd-logind[1456]: Removed session 5. Jan 13 20:19:25.912189 kubelet[2698]: I0113 20:19:25.912112 2698 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:19:25.912853 containerd[1469]: time="2025-01-13T20:19:25.912655706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
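The pod_startup_latency_tracker entries above report podStartSLOduration values that line up with a simple reconstruction: watchObservedRunningTime minus podCreationTimestamp, minus the image-pulling window when there is one (the pull timestamps here are zero-valued, while the kube-flannel pod further down does have its pull time subtracted). The sketch below, with a hypothetical mustParse helper, just redoes that arithmetic on the timestamps printed in this log; it illustrates the numbers and is not kubelet's tracker code.

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses the timestamp format used in the kubelet entries above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// kube-controller-manager: no image pull, so SLO == E2E == observed - created.
	created := mustParse("2025-01-13 20:19:20 +0000 UTC")
	observed := mustParse("2025-01-13 20:19:21.651266787 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1.651266787s

	// kube-flannel-ds-f58zb (further down in the log): the image-pulling window
	// is subtracted from the end-to-end figure to get podStartSLOduration.
	created = mustParse("2025-01-13 20:19:26 +0000 UTC")
	observed = mustParse("2025-01-13 20:19:37.682405759 +0000 UTC")
	pullStart := mustParse("2025-01-13 20:19:27.242702145 +0000 UTC")
	pullEnd := mustParse("2025-01-13 20:19:34.289908746 +0000 UTC")
	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 11.682405759s 4.635199158s
}
```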
Jan 13 20:19:25.913382 kubelet[2698]: I0113 20:19:25.913170 2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:19:26.791949 kubelet[2698]: I0113 20:19:26.791896 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bcbc92c-304a-4ca4-aa04-06c6cd8994a3-kube-proxy\") pod \"kube-proxy-chq2f\" (UID: \"7bcbc92c-304a-4ca4-aa04-06c6cd8994a3\") " pod="kube-system/kube-proxy-chq2f" Jan 13 20:19:26.792057 kubelet[2698]: I0113 20:19:26.791961 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bcbc92c-304a-4ca4-aa04-06c6cd8994a3-xtables-lock\") pod \"kube-proxy-chq2f\" (UID: \"7bcbc92c-304a-4ca4-aa04-06c6cd8994a3\") " pod="kube-system/kube-proxy-chq2f" Jan 13 20:19:26.792057 kubelet[2698]: I0113 20:19:26.791984 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bcbc92c-304a-4ca4-aa04-06c6cd8994a3-lib-modules\") pod \"kube-proxy-chq2f\" (UID: \"7bcbc92c-304a-4ca4-aa04-06c6cd8994a3\") " pod="kube-system/kube-proxy-chq2f" Jan 13 20:19:26.792057 kubelet[2698]: I0113 20:19:26.792000 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqb44\" (UniqueName: \"kubernetes.io/projected/7bcbc92c-304a-4ca4-aa04-06c6cd8994a3-kube-api-access-bqb44\") pod \"kube-proxy-chq2f\" (UID: \"7bcbc92c-304a-4ca4-aa04-06c6cd8994a3\") " pod="kube-system/kube-proxy-chq2f" Jan 13 20:19:26.792618 systemd[1]: Created slice kubepods-besteffort-pod7bcbc92c_304a_4ca4_aa04_06c6cd8994a3.slice - libcontainer container kubepods-besteffort-pod7bcbc92c_304a_4ca4_aa04_06c6cd8994a3.slice. Jan 13 20:19:26.817437 systemd[1]: Created slice kubepods-burstable-pod1b8c53ce_6686_4434_babe_b3a8a2960373.slice - libcontainer container kubepods-burstable-pod1b8c53ce_6686_4434_babe_b3a8a2960373.slice. 
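The kubepods slice names systemd creates in these entries follow a pattern you can read straight off the log: the pod's QoS class plus its UID with dashes replaced by underscores, wrapped as a .slice unit. A tiny sketch of that observed pattern follows; the sliceName helper is illustrative, not the kubelet cgroup manager's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the systemd lines above: the pod
// UID has its dashes replaced by underscores and is wrapped in a QoS-class
// slice prefix.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "7bcbc92c-304a-4ca4-aa04-06c6cd8994a3"))
	// kubepods-besteffort-pod7bcbc92c_304a_4ca4_aa04_06c6cd8994a3.slice
	fmt.Println(sliceName("burstable", "1b8c53ce-6686-4434-babe-b3a8a2960373"))
	// kubepods-burstable-pod1b8c53ce_6686_4434_babe_b3a8a2960373.slice
}
```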
Jan 13 20:19:26.894332 kubelet[2698]: I0113 20:19:26.892922 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/1b8c53ce-6686-4434-babe-b3a8a2960373-flannel-cfg\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:26.894332 kubelet[2698]: I0113 20:19:26.893078 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/1b8c53ce-6686-4434-babe-b3a8a2960373-cni\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:26.894332 kubelet[2698]: I0113 20:19:26.893138 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b8c53ce-6686-4434-babe-b3a8a2960373-xtables-lock\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:26.894332 kubelet[2698]: I0113 20:19:26.893195 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b8c53ce-6686-4434-babe-b3a8a2960373-run\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:26.894332 kubelet[2698]: I0113 20:19:26.893219 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/1b8c53ce-6686-4434-babe-b3a8a2960373-cni-plugin\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:26.894677 kubelet[2698]: I0113 20:19:26.893262 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqgr\" (UniqueName: \"kubernetes.io/projected/1b8c53ce-6686-4434-babe-b3a8a2960373-kube-api-access-lhqgr\") pod \"kube-flannel-ds-f58zb\" (UID: \"1b8c53ce-6686-4434-babe-b3a8a2960373\") " pod="kube-flannel/kube-flannel-ds-f58zb" Jan 13 20:19:27.104708 containerd[1469]: time="2025-01-13T20:19:27.103699029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chq2f,Uid:7bcbc92c-304a-4ca4-aa04-06c6cd8994a3,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:27.122882 containerd[1469]: time="2025-01-13T20:19:27.122472015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-f58zb,Uid:1b8c53ce-6686-4434-babe-b3a8a2960373,Namespace:kube-flannel,Attempt:0,}" Jan 13 20:19:27.135473 containerd[1469]: time="2025-01-13T20:19:27.135343176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:27.135686 containerd[1469]: time="2025-01-13T20:19:27.135433176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:27.135686 containerd[1469]: time="2025-01-13T20:19:27.135605734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:27.136190 containerd[1469]: time="2025-01-13T20:19:27.136149409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:27.162861 systemd[1]: Started cri-containerd-562cf7bd5ff03fa6cde6c5734857681f566ea7d25cae82e4896c91f4fe3cbec4.scope - libcontainer container 562cf7bd5ff03fa6cde6c5734857681f566ea7d25cae82e4896c91f4fe3cbec4. Jan 13 20:19:27.173953 containerd[1469]: time="2025-01-13T20:19:27.172393514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:27.174542 containerd[1469]: time="2025-01-13T20:19:27.173917140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:27.174846 containerd[1469]: time="2025-01-13T20:19:27.174728493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:27.175491 containerd[1469]: time="2025-01-13T20:19:27.174969850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:27.197762 systemd[1]: Started cri-containerd-6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a.scope - libcontainer container 6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a. Jan 13 20:19:27.201380 containerd[1469]: time="2025-01-13T20:19:27.200746212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chq2f,Uid:7bcbc92c-304a-4ca4-aa04-06c6cd8994a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"562cf7bd5ff03fa6cde6c5734857681f566ea7d25cae82e4896c91f4fe3cbec4\"" Jan 13 20:19:27.209642 containerd[1469]: time="2025-01-13T20:19:27.209601010Z" level=info msg="CreateContainer within sandbox \"562cf7bd5ff03fa6cde6c5734857681f566ea7d25cae82e4896c91f4fe3cbec4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:19:27.241061 containerd[1469]: time="2025-01-13T20:19:27.240935441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-f58zb,Uid:1b8c53ce-6686-4434-babe-b3a8a2960373,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\"" Jan 13 20:19:27.244109 containerd[1469]: time="2025-01-13T20:19:27.244072412Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 20:19:27.244556 containerd[1469]: time="2025-01-13T20:19:27.244519768Z" level=info msg="CreateContainer within sandbox \"562cf7bd5ff03fa6cde6c5734857681f566ea7d25cae82e4896c91f4fe3cbec4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5d88b92994e7eb509c84f152f72447a0a83ab1f981f816aad7ddda1a08fb924\"" Jan 13 20:19:27.246346 containerd[1469]: time="2025-01-13T20:19:27.246251432Z" level=info msg="StartContainer for \"c5d88b92994e7eb509c84f152f72447a0a83ab1f981f816aad7ddda1a08fb924\"" Jan 13 20:19:27.279740 systemd[1]: Started cri-containerd-c5d88b92994e7eb509c84f152f72447a0a83ab1f981f816aad7ddda1a08fb924.scope - libcontainer container c5d88b92994e7eb509c84f152f72447a0a83ab1f981f816aad7ddda1a08fb924. 
Jan 13 20:19:27.318509 containerd[1469]: time="2025-01-13T20:19:27.318417925Z" level=info msg="StartContainer for \"c5d88b92994e7eb509c84f152f72447a0a83ab1f981f816aad7ddda1a08fb924\" returns successfully" Jan 13 20:19:29.843062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717944849.mount: Deactivated successfully. Jan 13 20:19:29.893519 containerd[1469]: time="2025-01-13T20:19:29.893118698Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:29.896983 containerd[1469]: time="2025-01-13T20:19:29.896861944Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Jan 13 20:19:29.898634 containerd[1469]: time="2025-01-13T20:19:29.898551849Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:29.903332 containerd[1469]: time="2025-01-13T20:19:29.902277256Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:29.903332 containerd[1469]: time="2025-01-13T20:19:29.903154608Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.659037076s" Jan 13 20:19:29.903332 containerd[1469]: time="2025-01-13T20:19:29.903191088Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 13 20:19:29.907405 containerd[1469]: time="2025-01-13T20:19:29.907357090Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 20:19:29.923823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752416960.mount: Deactivated successfully. Jan 13 20:19:29.927184 containerd[1469]: time="2025-01-13T20:19:29.926986315Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085\"" Jan 13 20:19:29.928152 containerd[1469]: time="2025-01-13T20:19:29.927909027Z" level=info msg="StartContainer for \"381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085\"" Jan 13 20:19:29.965757 systemd[1]: Started cri-containerd-381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085.scope - libcontainer container 381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085. Jan 13 20:19:29.998195 systemd[1]: cri-containerd-381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085.scope: Deactivated successfully. 
Jan 13 20:19:30.001919 containerd[1469]: time="2025-01-13T20:19:30.001795928Z" level=info msg="StartContainer for \"381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085\" returns successfully" Jan 13 20:19:30.050977 containerd[1469]: time="2025-01-13T20:19:30.050618539Z" level=info msg="shim disconnected" id=381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085 namespace=k8s.io Jan 13 20:19:30.050977 containerd[1469]: time="2025-01-13T20:19:30.050858937Z" level=warning msg="cleaning up after shim disconnected" id=381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085 namespace=k8s.io Jan 13 20:19:30.050977 containerd[1469]: time="2025-01-13T20:19:30.050870497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:30.066569 containerd[1469]: time="2025-01-13T20:19:30.065768166Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:19:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:19:30.576242 kubelet[2698]: I0113 20:19:30.576035 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chq2f" podStartSLOduration=4.575995969 podStartE2EDuration="4.575995969s" podCreationTimestamp="2025-01-13 20:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:27.625232051 +0000 UTC m=+7.202873482" watchObservedRunningTime="2025-01-13 20:19:30.575995969 +0000 UTC m=+10.153637400" Jan 13 20:19:30.622384 containerd[1469]: time="2025-01-13T20:19:30.622319122Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 20:19:30.755610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381f5d9971d266367d155270f45faa99caf2062d62865092cd5df964fd372085-rootfs.mount: Deactivated successfully. Jan 13 20:19:33.285729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459074985.mount: Deactivated successfully. 
Jan 13 20:19:34.279361 containerd[1469]: time="2025-01-13T20:19:34.279296513Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:34.281343 containerd[1469]: time="2025-01-13T20:19:34.281271777Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 13 20:19:34.283083 containerd[1469]: time="2025-01-13T20:19:34.283010083Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:34.287600 containerd[1469]: time="2025-01-13T20:19:34.286749092Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:34.288359 containerd[1469]: time="2025-01-13T20:19:34.288319559Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.665932957s" Jan 13 20:19:34.288359 containerd[1469]: time="2025-01-13T20:19:34.288357559Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 13 20:19:34.293412 containerd[1469]: time="2025-01-13T20:19:34.293370478Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:19:34.313206 containerd[1469]: time="2025-01-13T20:19:34.313079916Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e\"" Jan 13 20:19:34.314922 containerd[1469]: time="2025-01-13T20:19:34.314878061Z" level=info msg="StartContainer for \"33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e\"" Jan 13 20:19:34.357875 systemd[1]: Started cri-containerd-33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e.scope - libcontainer container 33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e. Jan 13 20:19:34.392738 systemd[1]: cri-containerd-33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e.scope: Deactivated successfully. Jan 13 20:19:34.392996 containerd[1469]: time="2025-01-13T20:19:34.392869659Z" level=info msg="StartContainer for \"33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e\" returns successfully" Jan 13 20:19:34.414543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e-rootfs.mount: Deactivated successfully. 
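The pull of docker.io/flannel/flannel:v0.22.0 above reports 26874261 bytes read over 3.665932957s, roughly 7 MiB/s; the earlier flannel-cni-plugin pull (3673531 bytes in 2.659037076s) comes out to about 1.3 MiB/s by the same arithmetic. A minimal sketch of the calculation, assuming the bytes-read counter is a fair proxy for the data actually transferred:

```go
package main

import "fmt"

func main() {
	// Figures taken from the containerd pull entries above.
	const flannelBytes = 26874261   // "active requests=0, bytes read=26874261"
	const flannelSecs = 3.665932957 // "... in 3.665932957s"
	const cniPluginBytes = 3673531  // earlier flannel-cni-plugin pull
	const cniPluginSecs = 2.659037076

	rate := func(bytes, secs float64) float64 { return bytes / secs / (1 << 20) }
	fmt.Printf("flannel:            %.2f MiB/s\n", rate(flannelBytes, flannelSecs))
	fmt.Printf("flannel-cni-plugin: %.2f MiB/s\n", rate(cniPluginBytes, cniPluginSecs))
}
```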
Jan 13 20:19:34.480326 kubelet[2698]: I0113 20:19:34.479447 2698 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:19:34.511939 kubelet[2698]: W0113 20:19:34.511728 2698 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152-2-0-6-9e5a1dc0a6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-9e5a1dc0a6' and this object Jan 13 20:19:34.513033 kubelet[2698]: E0113 20:19:34.512522 2698 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4152-2-0-6-9e5a1dc0a6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152-2-0-6-9e5a1dc0a6' and this object" logger="UnhandledError" Jan 13 20:19:34.523613 systemd[1]: Created slice kubepods-burstable-poded7671cc_9b10_481b_ac7d_d8f3a3a46a78.slice - libcontainer container kubepods-burstable-poded7671cc_9b10_481b_ac7d_d8f3a3a46a78.slice. Jan 13 20:19:34.527918 systemd[1]: Created slice kubepods-burstable-podecf51e7b_0916_4538_8a0e_f3fa041d5eb3.slice - libcontainer container kubepods-burstable-podecf51e7b_0916_4538_8a0e_f3fa041d5eb3.slice. Jan 13 20:19:34.538236 containerd[1469]: time="2025-01-13T20:19:34.537799588Z" level=info msg="shim disconnected" id=33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e namespace=k8s.io Jan 13 20:19:34.538236 containerd[1469]: time="2025-01-13T20:19:34.537922867Z" level=warning msg="cleaning up after shim disconnected" id=33e06be0c3a4864fe819337730600d1a560b0ca788a160853813d5a68039cd7e namespace=k8s.io Jan 13 20:19:34.538236 containerd[1469]: time="2025-01-13T20:19:34.537939227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:34.544918 kubelet[2698]: I0113 20:19:34.544875 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdx4\" (UniqueName: \"kubernetes.io/projected/ed7671cc-9b10-481b-ac7d-d8f3a3a46a78-kube-api-access-ktdx4\") pod \"coredns-6f6b679f8f-pxv5l\" (UID: \"ed7671cc-9b10-481b-ac7d-d8f3a3a46a78\") " pod="kube-system/coredns-6f6b679f8f-pxv5l" Jan 13 20:19:34.544918 kubelet[2698]: I0113 20:19:34.544926 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7671cc-9b10-481b-ac7d-d8f3a3a46a78-config-volume\") pod \"coredns-6f6b679f8f-pxv5l\" (UID: \"ed7671cc-9b10-481b-ac7d-d8f3a3a46a78\") " pod="kube-system/coredns-6f6b679f8f-pxv5l" Jan 13 20:19:34.545139 kubelet[2698]: I0113 20:19:34.544948 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2z8\" (UniqueName: \"kubernetes.io/projected/ecf51e7b-0916-4538-8a0e-f3fa041d5eb3-kube-api-access-sq2z8\") pod \"coredns-6f6b679f8f-qvvkh\" (UID: \"ecf51e7b-0916-4538-8a0e-f3fa041d5eb3\") " pod="kube-system/coredns-6f6b679f8f-qvvkh" Jan 13 20:19:34.545139 kubelet[2698]: I0113 20:19:34.544970 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecf51e7b-0916-4538-8a0e-f3fa041d5eb3-config-volume\") pod \"coredns-6f6b679f8f-qvvkh\" (UID: \"ecf51e7b-0916-4538-8a0e-f3fa041d5eb3\") " 
pod="kube-system/coredns-6f6b679f8f-qvvkh" Jan 13 20:19:34.639542 containerd[1469]: time="2025-01-13T20:19:34.637522528Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 20:19:34.664516 containerd[1469]: time="2025-01-13T20:19:34.662773680Z" level=info msg="CreateContainer within sandbox \"6f480bed589860005e8c065b678dec0705a88c497e974bd9ba1ad8a973cbc00a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"69ae10e14d7e5daaeb1ccdca50cb44a369e4d9d3164a7ce10e0f175e10d74729\"" Jan 13 20:19:34.664516 containerd[1469]: time="2025-01-13T20:19:34.663605593Z" level=info msg="StartContainer for \"69ae10e14d7e5daaeb1ccdca50cb44a369e4d9d3164a7ce10e0f175e10d74729\"" Jan 13 20:19:34.715225 systemd[1]: Started cri-containerd-69ae10e14d7e5daaeb1ccdca50cb44a369e4d9d3164a7ce10e0f175e10d74729.scope - libcontainer container 69ae10e14d7e5daaeb1ccdca50cb44a369e4d9d3164a7ce10e0f175e10d74729. Jan 13 20:19:34.762773 containerd[1469]: time="2025-01-13T20:19:34.762626779Z" level=info msg="StartContainer for \"69ae10e14d7e5daaeb1ccdca50cb44a369e4d9d3164a7ce10e0f175e10d74729\" returns successfully" Jan 13 20:19:35.646696 kubelet[2698]: E0113 20:19:35.646578 2698 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:35.646696 kubelet[2698]: E0113 20:19:35.646610 2698 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:35.646696 kubelet[2698]: E0113 20:19:35.646683 2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed7671cc-9b10-481b-ac7d-d8f3a3a46a78-config-volume podName:ed7671cc-9b10-481b-ac7d-d8f3a3a46a78 nodeName:}" failed. No retries permitted until 2025-01-13 20:19:36.146660791 +0000 UTC m=+15.724302182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed7671cc-9b10-481b-ac7d-d8f3a3a46a78-config-volume") pod "coredns-6f6b679f8f-pxv5l" (UID: "ed7671cc-9b10-481b-ac7d-d8f3a3a46a78") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:35.646696 kubelet[2698]: E0113 20:19:35.646701 2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecf51e7b-0916-4538-8a0e-f3fa041d5eb3-config-volume podName:ecf51e7b-0916-4538-8a0e-f3fa041d5eb3 nodeName:}" failed. No retries permitted until 2025-01-13 20:19:36.146693231 +0000 UTC m=+15.724334662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ecf51e7b-0916-4538-8a0e-f3fa041d5eb3-config-volume") pod "coredns-6f6b679f8f-qvvkh" (UID: "ecf51e7b-0916-4538-8a0e-f3fa041d5eb3") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:35.843139 systemd-networkd[1349]: flannel.1: Link UP Jan 13 20:19:35.843144 systemd-networkd[1349]: flannel.1: Gained carrier Jan 13 20:19:36.328444 containerd[1469]: time="2025-01-13T20:19:36.328051835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pxv5l,Uid:ed7671cc-9b10-481b-ac7d-d8f3a3a46a78,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:36.334015 containerd[1469]: time="2025-01-13T20:19:36.332897236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qvvkh,Uid:ecf51e7b-0916-4538-8a0e-f3fa041d5eb3,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:36.368596 systemd-networkd[1349]: cni0: Link UP Jan 13 20:19:36.368604 systemd-networkd[1349]: cni0: Gained carrier Jan 13 20:19:36.370930 systemd-networkd[1349]: cni0: Lost carrier Jan 13 20:19:36.381827 systemd-networkd[1349]: vethffca9b68: Link UP Jan 13 20:19:36.382414 systemd-networkd[1349]: vethb76ab830: Link UP Jan 13 20:19:36.383689 kernel: cni0: port 1(vethffca9b68) entered blocking state Jan 13 20:19:36.383758 kernel: cni0: port 1(vethffca9b68) entered disabled state Jan 13 20:19:36.383776 kernel: vethffca9b68: entered allmulticast mode Jan 13 20:19:36.386534 kernel: vethffca9b68: entered promiscuous mode Jan 13 20:19:36.387528 kernel: cni0: port 1(vethffca9b68) entered blocking state Jan 13 20:19:36.387590 kernel: cni0: port 1(vethffca9b68) entered forwarding state Jan 13 20:19:36.390581 kernel: cni0: port 1(vethffca9b68) entered disabled state Jan 13 20:19:36.390671 kernel: cni0: port 2(vethb76ab830) entered blocking state Jan 13 20:19:36.392793 kernel: cni0: port 2(vethb76ab830) entered disabled state Jan 13 20:19:36.394715 kernel: vethb76ab830: entered allmulticast mode Jan 13 20:19:36.394828 kernel: vethb76ab830: entered promiscuous mode Jan 13 20:19:36.401538 kernel: cni0: port 1(vethffca9b68) entered blocking state Jan 13 20:19:36.401633 kernel: cni0: port 1(vethffca9b68) entered forwarding state Jan 13 20:19:36.401660 systemd-networkd[1349]: vethffca9b68: Gained carrier Jan 13 20:19:36.402790 systemd-networkd[1349]: cni0: Gained carrier Jan 13 20:19:36.410503 kernel: cni0: port 2(vethb76ab830) entered blocking state Jan 13 20:19:36.410611 kernel: cni0: port 2(vethb76ab830) entered forwarding state Jan 13 20:19:36.410643 systemd-networkd[1349]: vethb76ab830: Gained carrier Jan 13 20:19:36.416786 containerd[1469]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jan 13 20:19:36.416786 containerd[1469]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:19:36.419204 containerd[1469]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 13 20:19:36.419204 containerd[1469]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jan 13 20:19:36.419204 containerd[1469]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:19:36.449853 containerd[1469]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:19:36.445522738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:36.449853 containerd[1469]: time="2025-01-13T20:19:36.445792816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:36.449853 containerd[1469]: time="2025-01-13T20:19:36.445823976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:36.449853 containerd[1469]: time="2025-01-13T20:19:36.446458131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:36.458414 containerd[1469]: time="2025-01-13T20:19:36.458299797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:36.459645 containerd[1469]: time="2025-01-13T20:19:36.458868752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:36.459861 containerd[1469]: time="2025-01-13T20:19:36.458935871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:36.460038 containerd[1469]: time="2025-01-13T20:19:36.459991383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:36.476787 systemd[1]: Started cri-containerd-4ab498f411a0a80f917dbe8d684aed8dfa9db56f3d2e14b24fc54e63ff64bcfc.scope - libcontainer container 4ab498f411a0a80f917dbe8d684aed8dfa9db56f3d2e14b24fc54e63ff64bcfc. Jan 13 20:19:36.497897 systemd[1]: Started cri-containerd-93d151613f7df4310edf574b4be208d92b2be1d996059c7f0cde4789d0620f27.scope - libcontainer container 93d151613f7df4310edf574b4be208d92b2be1d996059c7f0cde4789d0620f27. 
Jan 13 20:19:36.538983 containerd[1469]: time="2025-01-13T20:19:36.538763795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qvvkh,Uid:ecf51e7b-0916-4538-8a0e-f3fa041d5eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab498f411a0a80f917dbe8d684aed8dfa9db56f3d2e14b24fc54e63ff64bcfc\"" Jan 13 20:19:36.545271 containerd[1469]: time="2025-01-13T20:19:36.544837387Z" level=info msg="CreateContainer within sandbox \"4ab498f411a0a80f917dbe8d684aed8dfa9db56f3d2e14b24fc54e63ff64bcfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:19:36.551692 containerd[1469]: time="2025-01-13T20:19:36.551656292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pxv5l,Uid:ed7671cc-9b10-481b-ac7d-d8f3a3a46a78,Namespace:kube-system,Attempt:0,} returns sandbox id \"93d151613f7df4310edf574b4be208d92b2be1d996059c7f0cde4789d0620f27\"" Jan 13 20:19:36.559415 containerd[1469]: time="2025-01-13T20:19:36.559360471Z" level=info msg="CreateContainer within sandbox \"93d151613f7df4310edf574b4be208d92b2be1d996059c7f0cde4789d0620f27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:19:36.566291 containerd[1469]: time="2025-01-13T20:19:36.566165577Z" level=info msg="CreateContainer within sandbox \"4ab498f411a0a80f917dbe8d684aed8dfa9db56f3d2e14b24fc54e63ff64bcfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba3e632c577f9c80840aafe0b1bc19571eeaaac6bb2b84ee3c0a6bb1623cfab4\"" Jan 13 20:19:36.567528 containerd[1469]: time="2025-01-13T20:19:36.567344007Z" level=info msg="StartContainer for \"ba3e632c577f9c80840aafe0b1bc19571eeaaac6bb2b84ee3c0a6bb1623cfab4\"" Jan 13 20:19:36.589565 containerd[1469]: time="2025-01-13T20:19:36.589404431Z" level=info msg="CreateContainer within sandbox \"93d151613f7df4310edf574b4be208d92b2be1d996059c7f0cde4789d0620f27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c9c82473dc186b7a4b3d32f6f5bf27a044ef268053a0c76dd01c0b651dd50c8\"" Jan 13 20:19:36.594279 containerd[1469]: time="2025-01-13T20:19:36.594209553Z" level=info msg="StartContainer for \"1c9c82473dc186b7a4b3d32f6f5bf27a044ef268053a0c76dd01c0b651dd50c8\"" Jan 13 20:19:36.605736 systemd[1]: Started cri-containerd-ba3e632c577f9c80840aafe0b1bc19571eeaaac6bb2b84ee3c0a6bb1623cfab4.scope - libcontainer container ba3e632c577f9c80840aafe0b1bc19571eeaaac6bb2b84ee3c0a6bb1623cfab4. Jan 13 20:19:36.633754 systemd[1]: Started cri-containerd-1c9c82473dc186b7a4b3d32f6f5bf27a044ef268053a0c76dd01c0b651dd50c8.scope - libcontainer container 1c9c82473dc186b7a4b3d32f6f5bf27a044ef268053a0c76dd01c0b651dd50c8. 
Jan 13 20:19:36.660607 containerd[1469]: time="2025-01-13T20:19:36.659288474Z" level=info msg="StartContainer for \"ba3e632c577f9c80840aafe0b1bc19571eeaaac6bb2b84ee3c0a6bb1623cfab4\" returns successfully" Jan 13 20:19:36.687764 containerd[1469]: time="2025-01-13T20:19:36.687178452Z" level=info msg="StartContainer for \"1c9c82473dc186b7a4b3d32f6f5bf27a044ef268053a0c76dd01c0b651dd50c8\" returns successfully" Jan 13 20:19:37.440868 systemd-networkd[1349]: flannel.1: Gained IPv6LL Jan 13 20:19:37.632749 systemd-networkd[1349]: cni0: Gained IPv6LL Jan 13 20:19:37.633687 systemd-networkd[1349]: vethffca9b68: Gained IPv6LL Jan 13 20:19:37.684228 kubelet[2698]: I0113 20:19:37.682431 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-f58zb" podStartSLOduration=4.635199158 podStartE2EDuration="11.682405759s" podCreationTimestamp="2025-01-13 20:19:26 +0000 UTC" firstStartedPulling="2025-01-13 20:19:27.242702145 +0000 UTC m=+6.820343576" lastFinishedPulling="2025-01-13 20:19:34.289908746 +0000 UTC m=+13.867550177" observedRunningTime="2025-01-13 20:19:35.656471512 +0000 UTC m=+15.234112943" watchObservedRunningTime="2025-01-13 20:19:37.682405759 +0000 UTC m=+17.260047190" Jan 13 20:19:37.684228 kubelet[2698]: I0113 20:19:37.682838 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qvvkh" podStartSLOduration=11.682824956 podStartE2EDuration="11.682824956s" podCreationTimestamp="2025-01-13 20:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:37.682571398 +0000 UTC m=+17.260212829" watchObservedRunningTime="2025-01-13 20:19:37.682824956 +0000 UTC m=+17.260466427" Jan 13 20:19:37.699059 kubelet[2698]: I0113 20:19:37.697987 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pxv5l" podStartSLOduration=11.697964117 podStartE2EDuration="11.697964117s" podCreationTimestamp="2025-01-13 20:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:37.697835638 +0000 UTC m=+17.275477029" watchObservedRunningTime="2025-01-13 20:19:37.697964117 +0000 UTC m=+17.275605548" Jan 13 20:19:38.144732 systemd-networkd[1349]: vethb76ab830: Gained IPv6LL Jan 13 20:24:04.154871 systemd[1]: Started sshd@5-138.199.153.195:22-147.75.109.163:54762.service - OpenSSH per-connection server daemon (147.75.109.163:54762). Jan 13 20:24:05.137303 sshd[4653]: Accepted publickey for core from 147.75.109.163 port 54762 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:05.139732 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:05.146897 systemd-logind[1456]: New session 6 of user core. Jan 13 20:24:05.149929 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:24:05.912137 sshd[4655]: Connection closed by 147.75.109.163 port 54762 Jan 13 20:24:05.911006 sshd-session[4653]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:05.915279 systemd[1]: sshd@5-138.199.153.195:22-147.75.109.163:54762.service: Deactivated successfully. Jan 13 20:24:05.917835 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:24:05.920928 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. 
Jan 13 20:24:05.923166 systemd-logind[1456]: Removed session 6. Jan 13 20:24:11.087988 systemd[1]: Started sshd@6-138.199.153.195:22-147.75.109.163:45122.service - OpenSSH per-connection server daemon (147.75.109.163:45122). Jan 13 20:24:12.067331 sshd[4688]: Accepted publickey for core from 147.75.109.163 port 45122 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:12.069905 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:12.075577 systemd-logind[1456]: New session 7 of user core. Jan 13 20:24:12.080839 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:24:12.816801 sshd[4696]: Connection closed by 147.75.109.163 port 45122 Jan 13 20:24:12.816655 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:12.821851 systemd[1]: sshd@6-138.199.153.195:22-147.75.109.163:45122.service: Deactivated successfully. Jan 13 20:24:12.826212 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:24:12.827844 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:24:12.829234 systemd-logind[1456]: Removed session 7. Jan 13 20:24:17.993067 systemd[1]: Started sshd@7-138.199.153.195:22-147.75.109.163:47602.service - OpenSSH per-connection server daemon (147.75.109.163:47602). Jan 13 20:24:18.970042 sshd[4744]: Accepted publickey for core from 147.75.109.163 port 47602 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:18.972137 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:18.978036 systemd-logind[1456]: New session 8 of user core. Jan 13 20:24:18.984961 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:24:19.726991 sshd[4746]: Connection closed by 147.75.109.163 port 47602 Jan 13 20:24:19.725882 sshd-session[4744]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:19.731162 systemd[1]: sshd@7-138.199.153.195:22-147.75.109.163:47602.service: Deactivated successfully. Jan 13 20:24:19.734669 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:24:19.737155 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:24:19.738343 systemd-logind[1456]: Removed session 8. Jan 13 20:24:19.898974 systemd[1]: Started sshd@8-138.199.153.195:22-147.75.109.163:47610.service - OpenSSH per-connection server daemon (147.75.109.163:47610). Jan 13 20:24:20.874051 sshd[4758]: Accepted publickey for core from 147.75.109.163 port 47610 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:20.876468 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:20.883667 systemd-logind[1456]: New session 9 of user core. Jan 13 20:24:20.886663 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:24:21.678023 sshd[4762]: Connection closed by 147.75.109.163 port 47610 Jan 13 20:24:21.677893 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:21.682592 systemd[1]: sshd@8-138.199.153.195:22-147.75.109.163:47610.service: Deactivated successfully. Jan 13 20:24:21.688928 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:24:21.691531 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:24:21.693976 systemd-logind[1456]: Removed session 9. 
Jan 13 20:24:21.859520 systemd[1]: Started sshd@9-138.199.153.195:22-147.75.109.163:47612.service - OpenSSH per-connection server daemon (147.75.109.163:47612).
Jan 13 20:24:22.840995 sshd[4777]: Accepted publickey for core from 147.75.109.163 port 47612 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:22.843968 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:22.852144 systemd-logind[1456]: New session 10 of user core.
Jan 13 20:24:22.857810 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:24:23.600048 sshd[4794]: Connection closed by 147.75.109.163 port 47612
Jan 13 20:24:23.600969 sshd-session[4777]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:23.605998 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:24:23.607308 systemd[1]: sshd@9-138.199.153.195:22-147.75.109.163:47612.service: Deactivated successfully.
Jan 13 20:24:23.610772 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:24:23.612386 systemd-logind[1456]: Removed session 10.
Jan 13 20:24:28.785709 systemd[1]: Started sshd@10-138.199.153.195:22-147.75.109.163:54434.service - OpenSSH per-connection server daemon (147.75.109.163:54434).
Jan 13 20:24:29.778942 sshd[4829]: Accepted publickey for core from 147.75.109.163 port 54434 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:29.781970 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:29.788041 systemd-logind[1456]: New session 11 of user core.
Jan 13 20:24:29.792730 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:24:30.541463 sshd[4831]: Connection closed by 147.75.109.163 port 54434
Jan 13 20:24:30.542873 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:30.548054 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:24:30.548314 systemd[1]: sshd@10-138.199.153.195:22-147.75.109.163:54434.service: Deactivated successfully.
Jan 13 20:24:30.550551 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:24:30.553137 systemd-logind[1456]: Removed session 11.
Jan 13 20:24:30.712620 systemd[1]: Started sshd@11-138.199.153.195:22-147.75.109.163:54448.service - OpenSSH per-connection server daemon (147.75.109.163:54448).
Jan 13 20:24:31.722924 sshd[4842]: Accepted publickey for core from 147.75.109.163 port 54448 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:31.725115 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:31.731882 systemd-logind[1456]: New session 12 of user core.
Jan 13 20:24:31.739829 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:24:32.527982 sshd[4850]: Connection closed by 147.75.109.163 port 54448
Jan 13 20:24:32.527879 sshd-session[4842]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:32.532082 systemd[1]: sshd@11-138.199.153.195:22-147.75.109.163:54448.service: Deactivated successfully.
Jan 13 20:24:32.534199 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:24:32.535522 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:24:32.536622 systemd-logind[1456]: Removed session 12.
Jan 13 20:24:32.698882 systemd[1]: Started sshd@12-138.199.153.195:22-147.75.109.163:54458.service - OpenSSH per-connection server daemon (147.75.109.163:54458).
Jan 13 20:24:33.691924 sshd[4874]: Accepted publickey for core from 147.75.109.163 port 54458 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:33.693937 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:33.699839 systemd-logind[1456]: New session 13 of user core.
Jan 13 20:24:33.705767 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:24:35.778111 sshd[4876]: Connection closed by 147.75.109.163 port 54458
Jan 13 20:24:35.779435 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:35.784377 systemd[1]: sshd@12-138.199.153.195:22-147.75.109.163:54458.service: Deactivated successfully.
Jan 13 20:24:35.787546 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:24:35.788872 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:24:35.791669 systemd-logind[1456]: Removed session 13.
Jan 13 20:24:35.955105 systemd[1]: Started sshd@13-138.199.153.195:22-147.75.109.163:54472.service - OpenSSH per-connection server daemon (147.75.109.163:54472).
Jan 13 20:24:36.942192 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 54472 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:36.943747 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:36.948368 systemd-logind[1456]: New session 14 of user core.
Jan 13 20:24:36.956770 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:24:37.822373 sshd[4900]: Connection closed by 147.75.109.163 port 54472
Jan 13 20:24:37.823728 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:37.828045 systemd[1]: sshd@13-138.199.153.195:22-147.75.109.163:54472.service: Deactivated successfully.
Jan 13 20:24:37.830372 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:24:37.831229 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:24:37.832353 systemd-logind[1456]: Removed session 14.
Jan 13 20:24:37.998890 systemd[1]: Started sshd@14-138.199.153.195:22-147.75.109.163:33982.service - OpenSSH per-connection server daemon (147.75.109.163:33982).
Jan 13 20:24:38.985221 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 33982 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:38.988215 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:38.993548 systemd-logind[1456]: New session 15 of user core.
Jan 13 20:24:39.000757 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:24:39.749017 sshd[4926]: Connection closed by 147.75.109.163 port 33982
Jan 13 20:24:39.749905 sshd-session[4924]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:39.755885 systemd[1]: sshd@14-138.199.153.195:22-147.75.109.163:33982.service: Deactivated successfully.
Jan 13 20:24:39.758349 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:24:39.759421 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:24:39.760388 systemd-logind[1456]: Removed session 15.
Jan 13 20:24:44.924781 systemd[1]: Started sshd@15-138.199.153.195:22-147.75.109.163:33990.service - OpenSSH per-connection server daemon (147.75.109.163:33990).
Jan 13 20:24:45.933201 sshd[4961]: Accepted publickey for core from 147.75.109.163 port 33990 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:45.935292 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:45.943155 systemd-logind[1456]: New session 16 of user core.
Jan 13 20:24:45.950167 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:24:46.693073 sshd[4963]: Connection closed by 147.75.109.163 port 33990
Jan 13 20:24:46.694190 sshd-session[4961]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:46.700350 systemd[1]: sshd@15-138.199.153.195:22-147.75.109.163:33990.service: Deactivated successfully.
Jan 13 20:24:46.704442 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:24:46.706130 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:24:46.708203 systemd-logind[1456]: Removed session 16.
Jan 13 20:24:51.863266 systemd[1]: Started sshd@16-138.199.153.195:22-147.75.109.163:46334.service - OpenSSH per-connection server daemon (147.75.109.163:46334).
Jan 13 20:24:52.853296 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 46334 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:52.855636 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:52.862274 systemd-logind[1456]: New session 17 of user core.
Jan 13 20:24:52.868790 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:24:53.603199 sshd[5017]: Connection closed by 147.75.109.163 port 46334
Jan 13 20:24:53.604190 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:53.608978 systemd[1]: sshd@16-138.199.153.195:22-147.75.109.163:46334.service: Deactivated successfully.
Jan 13 20:24:53.609110 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:24:53.611374 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:24:53.613234 systemd-logind[1456]: Removed session 17.