Jan 13 20:16:46.927362 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:46.927388 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:16:46.927399 kernel: KASLR enabled
Jan 13 20:16:46.927405 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:46.927411 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x13479b218 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132357218
Jan 13 20:16:46.927416 kernel: random: crng init done
Jan 13 20:16:46.927423 kernel: secureboot: Secure boot disabled
Jan 13 20:16:46.927429 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:46.927435 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:16:46.927441 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:46.927449 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927455 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927461 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927467 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927474 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927482 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927489 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927496 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927502 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:46.927508 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:16:46.927514 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:16:46.927520 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:46.927527 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:46.927533 kernel: NUMA: NODE_DATA [mem 0x139821800-0x139826fff]
Jan 13 20:16:46.927539 kernel: Zone ranges:
Jan 13 20:16:46.927546 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:16:46.927554 kernel:   DMA32    empty
Jan 13 20:16:46.927560 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:16:46.927567 kernel: Movable zone start for each node
Jan 13 20:16:46.927573 kernel: Early memory node ranges
Jan 13 20:16:46.927579 kernel:   node   0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:16:46.927586 kernel:   node   0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:16:46.927592 kernel:   node   0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:16:46.927598 kernel:   node   0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:16:46.927605 kernel:   node   0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:16:46.927611 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:46.927617 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:16:46.927625 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:46.927631 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:46.927638 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:46.927647 kernel: psci: Trusted OS migration not required
Jan 13 20:16:46.927654 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:46.927661 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:46.927670 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:46.927676 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:46.927683 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:16:46.927690 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:46.927696 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:46.927703 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:46.927710 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:46.927716 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:46.927723 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:46.927730 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:46.927737 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:46.927745 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:46.927752 kernel: alternatives: applying boot alternatives
Jan 13 20:16:46.927760 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:16:46.927767 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:46.927774 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:46.927781 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:46.927788 kernel: Fallback order for Node 0: 0
Jan 13 20:16:46.927794 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 13 20:16:46.927801 kernel: Policy zone: Normal
Jan 13 20:16:46.927808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:46.927814 kernel: software IO TLB: area num 2.
Jan 13 20:16:46.927829 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:16:46.927840 kernel: Memory: 3881024K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 214976K reserved, 0K cma-reserved)
Jan 13 20:16:46.927848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:16:46.927856 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:46.927865 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:46.927873 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:16:46.927880 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:46.927887 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:46.927893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:46.927914 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:16:46.927921 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:46.927931 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:46.927937 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:46.927944 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:46.927951 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:46.927958 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:46.927964 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:46.927971 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:46.927978 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:46.927985 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:16:46.927992 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:16:46.927998 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:46.928007 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:46.928014 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:46.928021 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:46.928028 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:46.928035 kernel: Console: colour dummy device 80x25
Jan 13 20:16:46.928042 kernel: ACPI: Core revision 20230628
Jan 13 20:16:46.928049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:46.928056 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:46.928063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:46.928070 kernel: landlock: Up and running.
Jan 13 20:16:46.928079 kernel: SELinux:  Initializing.
Jan 13 20:16:46.928086 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:46.928104 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:46.928111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:46.928119 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:46.928126 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:46.928133 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:46.928140 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:46.928147 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:46.928156 kernel: Remapping and enabling EFI services.
Jan 13 20:16:46.928162 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:46.928170 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:46.928177 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:46.928184 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:16:46.928191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:46.928198 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:46.928205 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:16:46.928212 kernel: SMP: Total of 2 processors activated.
Jan 13 20:16:46.928219 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:46.928228 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:46.928244 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:46.928258 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:46.928267 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:46.928275 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:46.928282 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:46.928290 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:46.928297 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:46.928305 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:46.928314 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:46.928322 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:46.928329 kernel: devtmpfs: initialized
Jan 13 20:16:46.928336 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:46.928344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:16:46.928351 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:46.928358 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:46.928366 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:16:46.928375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:46.928382 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:46.928390 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:46.928397 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:46.928405 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:46.928412 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:46.928420 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:46.928427 kernel: cpuidle: using governor menu
Jan 13 20:16:46.928434 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:46.928443 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:46.928451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:46.928458 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:46.928465 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:46.928473 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:46.928480 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:16:46.928488 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:46.928495 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:46.928503 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:46.928512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:46.928519 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:46.928527 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:46.928534 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:46.928541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:46.928548 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:46.928556 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:46.928563 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:46.928570 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:46.928580 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:46.928587 kernel: ACPI: Interpreter enabled
Jan 13 20:16:46.928594 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:46.928602 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:46.928609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:46.928617 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:46.928624 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:46.928803 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:46.928886 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:46.928981 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:46.929047 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:46.929111 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:46.929121 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:46.929128 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:46.929204 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:46.929288 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:46.929349 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:46.929409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:46.929496 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:46.929579 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:16:46.929649 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:16:46.929719 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:46.929803 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.929872 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:16:46.929988 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930059 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:16:46.930135 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930203 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:16:46.930335 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930410 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:16:46.930488 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930558 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:16:46.930634 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930704 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:16:46.930785 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.930871 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:16:46.930970 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.931040 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:16:46.931118 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:46.931185 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:16:46.931291 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:16:46.931361 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 13 20:16:46.931442 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:46.931519 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:16:46.931590 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:46.931659 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:46.931737 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:16:46.931812 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:16:46.931893 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:16:46.931997 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:16:46.932071 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:16:46.932149 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:16:46.932221 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:16:46.932354 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:16:46.932428 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:16:46.932506 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:16:46.932577 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:16:46.932646 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:46.932726 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:46.932801 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:16:46.932871 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:16:46.933008 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:46.933082 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:16:46.933149 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:46.933215 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:46.933303 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:16:46.933377 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:16:46.933444 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:16:46.933514 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:16:46.933579 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:46.933644 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:46.933717 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:16:46.933783 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:16:46.933855 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:16:46.933947 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:16:46.934016 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:16:46.934083 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:16:46.934153 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:16:46.934220 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:46.934303 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:46.934378 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:16:46.934454 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:46.934521 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:46.934592 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:16:46.934660 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:46.934729 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:46.934802 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:16:46.934870 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:46.934956 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:46.935028 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:16:46.935096 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:46.935175 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:16:46.935280 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:46.935359 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:16:46.935426 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:46.935504 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:16:46.935573 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:46.935643 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:16:46.935713 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:46.935783 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:46.935850 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:46.935936 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:46.936012 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:46.936081 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:46.936146 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:46.936215 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:16:46.936300 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:46.936376 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:16:46.936444 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:16:46.936518 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:16:46.936587 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:16:46.936656 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:16:46.936723 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:16:46.936792 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:16:46.936859 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:16:46.936942 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:16:46.937013 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:16:46.937092 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:16:46.937161 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:16:46.937232 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:16:46.937344 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:16:46.937415 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:16:46.937486 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:16:46.937556 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:16:46.937624 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:16:46.937702 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:16:46.937769 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 13 20:16:46.937844 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 13 20:16:46.938369 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:16:46.938469 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:46.938539 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:16:46.938609 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:16:46.938677 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Jan 13 20:16:46.938751 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:16:46.938816 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:46.939083 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:16:46.939225 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:16:46.939364 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Jan 13 20:16:46.939439 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:16:46.939506 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:46.939584 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:46.939654 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:16:46.939728 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:16:46.939798 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Jan 13 20:16:46.939866 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:16:46.940319 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:46.940413 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:46.940487 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:16:46.940552 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Jan 13 20:16:46.940618 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:16:46.940683 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:46.940759 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:16:46.940843 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:16:46.940935 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Jan 13 20:16:46.941003 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:16:46.941304 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:46.941394 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:16:46.941466 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:16:46.941537 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:16:46.941604 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Jan 13 20:16:46.941670 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:46.941749 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:46.941825 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:16:46.942010 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:16:46.942110 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:16:46.942185 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:16:46.942305 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Jan 13 20:16:46.942383 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:46.942460 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:46.942533 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:16:46.942602 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Jan 13 20:16:46.942669 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:46.942735 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:46.942807 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:16:46.942873 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Jan 13 20:16:46.945137 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:16:46.945271 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:46.945368 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:46.945431 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:46.945491 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:46.945576 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:16:46.945639 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:16:46.945700 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:46.945776 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 13 20:16:46.945840 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:16:46.945924 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:46.946001 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 13 20:16:46.946063 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:16:46.946123 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:46.946196 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 20:16:46.946275 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:16:46.946344 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:46.946429 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 13 20:16:46.946500 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:16:46.946563 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:46.946638 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 13 20:16:46.946705 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:46.946769 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:46.946847 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 13 20:16:46.946931 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:46.946999 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:46.947073 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 13 20:16:46.947143 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:46.947210 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:46.947344 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 13 20:16:46.947416 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:16:46.947481 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:46.947497 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:46.947506 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:46.947514 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:46.947522 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:46.947530 kernel: iommu: Default domain type: Translated
Jan 13 20:16:46.947538 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:46.947546 kernel: efivars: Registered efivars operations
Jan 13 20:16:46.947554 kernel: vgaarb: loaded
Jan 13 20:16:46.947562 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:46.947570 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:46.947580 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:46.947589 kernel: pnp: PnP ACPI init
Jan 13 20:16:46.947679 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:46.947691 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:46.947699 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:46.947707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:46.947715 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:46.947723 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:46.947733 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:46.947741 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:46.947749 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:46.947757 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:46.947765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:46.947772 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:46.947852 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:46.947865 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:46.947873 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:46.947884 kernel: Initialise system trusted keyrings
Jan 13 20:16:46.947892 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:46.948004 kernel: Key type asymmetric registered
Jan 13 20:16:46.948016 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:46.948025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:46.948039 kernel: io scheduler mq-deadline registered
Jan 13 20:16:46.948048 kernel: io scheduler kyber registered
Jan 13 20:16:46.948057 kernel: io scheduler bfq registered
Jan 13 20:16:46.948065 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:16:46.948162 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:16:46.948248 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:16:46.948330 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:46.948404 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:16:46.948471 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:16:46.948538 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:46.948616 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:16:46.948870 kernel: pcieport 0000:00:02.2:
AER: enabled with IRQ 52 Jan 13 20:16:46.951154 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.951298 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 13 20:16:46.951377 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 13 20:16:46.951447 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.951534 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 13 20:16:46.951607 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 13 20:16:46.951674 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.951748 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 13 20:16:46.951816 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 13 20:16:46.951884 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.952280 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 13 20:16:46.952379 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 13 20:16:46.952448 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.952526 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 13 20:16:46.952595 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 13 20:16:46.952661 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.952681 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 20:16:46.952752 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
Jan 13 20:16:46.952824 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 13 20:16:46.952892 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:16:46.952918 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:16:46.952927 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:16:46.952935 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:16:46.953023 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 13 20:16:46.953104 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 20:16:46.953181 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 13 20:16:46.953193 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:16:46.953201 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:16:46.953295 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 13 20:16:46.953308 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 13 20:16:46.953316 kernel: thunder_xcv, ver 1.0 Jan 13 20:16:46.953327 kernel: thunder_bgx, ver 1.0 Jan 13 20:16:46.953335 kernel: nicpf, ver 1.0 Jan 13 20:16:46.953343 kernel: nicvf, ver 1.0 Jan 13 20:16:46.953432 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:16:46.953498 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:46 UTC (1736799406) Jan 13 20:16:46.953508 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:16:46.953517 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:16:46.953525 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:16:46.953536 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:16:46.953544 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:16:46.953552 kernel: Segment Routing with IPv6 Jan 13 20:16:46.953560 kernel: In-situ 
OAM (IOAM) with IPv6 Jan 13 20:16:46.953568 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:16:46.953576 kernel: Key type dns_resolver registered Jan 13 20:16:46.953584 kernel: registered taskstats version 1 Jan 13 20:16:46.953592 kernel: Loading compiled-in X.509 certificates Jan 13 20:16:46.953600 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0' Jan 13 20:16:46.953608 kernel: Key type .fscrypt registered Jan 13 20:16:46.953618 kernel: Key type fscrypt-provisioning registered Jan 13 20:16:46.953626 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:16:46.953634 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:16:46.953642 kernel: ima: No architecture policies found Jan 13 20:16:46.953649 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:16:46.953658 kernel: clk: Disabling unused clocks Jan 13 20:16:46.953668 kernel: Freeing unused kernel memory: 39936K Jan 13 20:16:46.953676 kernel: Run /init as init process Jan 13 20:16:46.953686 kernel: with arguments: Jan 13 20:16:46.953694 kernel: /init Jan 13 20:16:46.953702 kernel: with environment: Jan 13 20:16:46.953709 kernel: HOME=/ Jan 13 20:16:46.953717 kernel: TERM=linux Jan 13 20:16:46.953725 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:16:46.953735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:46.953746 systemd[1]: Detected virtualization kvm. Jan 13 20:16:46.953757 systemd[1]: Detected architecture arm64. Jan 13 20:16:46.953765 systemd[1]: Running in initrd. Jan 13 20:16:46.953773 systemd[1]: No hostname configured, using default hostname. 
Jan 13 20:16:46.953781 systemd[1]: Hostname set to . Jan 13 20:16:46.953789 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:46.953798 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:16:46.953807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:46.953815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:46.953827 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:16:46.953835 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:46.953844 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:16:46.953852 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:16:46.953862 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:16:46.953870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:16:46.953879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:46.953889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:46.953908 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:46.953918 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:46.953927 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:46.953935 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:46.953944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:46.953952 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 13 20:16:46.953961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:16:46.953972 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:16:46.953980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:46.953988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:46.953996 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:46.954005 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:46.954013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:16:46.954021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:46.954030 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:16:46.954038 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:16:46.954048 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:46.954057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:46.954066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:46.954103 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 20:16:46.954127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:46.954136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:46.954144 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:16:46.954155 systemd-journald[237]: Journal started Jan 13 20:16:46.954184 systemd-journald[237]: Runtime Journal (/run/log/journal/f155feca86a94053b04152cf469384a7) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:16:46.954855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 13 20:16:46.959412 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:46.964087 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:46.966302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:46.968683 systemd-modules-load[238]: Inserted module 'overlay' Jan 13 20:16:46.986114 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:16:46.985327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:16:46.987997 kernel: Bridge firewalling registered Jan 13 20:16:46.987488 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 13 20:16:46.995210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:47.000941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:47.001998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:47.008179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:47.023495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:47.025929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:47.035521 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:47.044284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:16:47.047066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:47.058386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 20:16:47.062027 dracut-cmdline[270]: dracut-dracut-053 Jan 13 20:16:47.065994 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 13 20:16:47.094596 systemd-resolved[275]: Positive Trust Anchors: Jan 13 20:16:47.095793 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:47.095831 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:47.107201 systemd-resolved[275]: Defaulting to hostname 'linux'. Jan 13 20:16:47.108756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:47.111647 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:47.179182 kernel: SCSI subsystem initialized Jan 13 20:16:47.183966 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:16:47.193260 kernel: iscsi: registered transport (tcp) Jan 13 20:16:47.207199 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:16:47.207385 kernel: QLogic iSCSI HBA Driver Jan 13 20:16:47.269683 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:16:47.277301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:16:47.301984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:16:47.303695 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:16:47.303756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:16:47.360001 kernel: raid6: neonx8 gen() 14342 MB/s Jan 13 20:16:47.376971 kernel: raid6: neonx4 gen() 14434 MB/s Jan 13 20:16:47.393971 kernel: raid6: neonx2 gen() 11247 MB/s Jan 13 20:16:47.410974 kernel: raid6: neonx1 gen() 9651 MB/s Jan 13 20:16:47.428169 kernel: raid6: int64x8 gen() 6001 MB/s Jan 13 20:16:47.445043 kernel: raid6: int64x4 gen() 6508 MB/s Jan 13 20:16:47.461994 kernel: raid6: int64x2 gen() 5532 MB/s Jan 13 20:16:47.478959 kernel: raid6: int64x1 gen() 4757 MB/s Jan 13 20:16:47.479039 kernel: raid6: using algorithm neonx4 gen() 14434 MB/s Jan 13 20:16:47.496001 kernel: raid6: .... xor() 11348 MB/s, rmw enabled Jan 13 20:16:47.496104 kernel: raid6: using neon recovery algorithm Jan 13 20:16:47.501195 kernel: xor: measuring software checksum speed Jan 13 20:16:47.501312 kernel: 8regs : 20733 MB/sec Jan 13 20:16:47.502093 kernel: 32regs : 16467 MB/sec Jan 13 20:16:47.502135 kernel: arm64_neon : 27626 MB/sec Jan 13 20:16:47.502155 kernel: xor: using function: arm64_neon (27626 MB/sec) Jan 13 20:16:47.561971 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:16:47.578143 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:47.585999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:47.602834 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jan 13 20:16:47.606987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:16:47.617151 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:16:47.636699 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 13 20:16:47.696045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:16:47.703463 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:47.771040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:47.780646 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:16:47.813736 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:47.818311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:47.819217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:47.821523 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:47.830273 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:16:47.865850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:16:47.904411 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:16:47.943361 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:16:47.943414 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 13 20:16:47.943431 kernel: ACPI: bus type USB registered Jan 13 20:16:47.943441 kernel: usbcore: registered new interface driver usbfs Jan 13 20:16:47.943450 kernel: usbcore: registered new interface driver hub Jan 13 20:16:47.944609 kernel: usbcore: registered new device driver usb Jan 13 20:16:47.965476 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:47.965645 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:16:47.966660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:16:47.969926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:47.970120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:47.971048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:47.985385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:47.997958 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 13 20:16:48.000463 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 13 20:16:48.000647 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:16:48.000659 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:16:48.004183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:48.008037 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:16:48.017687 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:16:48.017830 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:16:48.018047 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:16:48.018175 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:16:48.018357 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:16:48.018457 kernel: hub 1-0:1.0: USB hub found Jan 13 20:16:48.018654 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:16:48.018741 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:16:48.018842 kernel: hub 2-0:1.0: USB hub found Jan 13 20:16:48.018996 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:16:48.015198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:16:48.022925 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 13 20:16:48.036576 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 13 20:16:48.036736 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 13 20:16:48.036836 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 13 20:16:48.036958 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:16:48.037043 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:16:48.037054 kernel: GPT:17805311 != 80003071 Jan 13 20:16:48.037062 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:16:48.037072 kernel: GPT:17805311 != 80003071 Jan 13 20:16:48.037081 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:16:48.037093 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:48.037104 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 13 20:16:48.053409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:48.098979 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (521) Jan 13 20:16:48.102079 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (518) Jan 13 20:16:48.114566 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 13 20:16:48.122664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 13 20:16:48.132417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:48.137207 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:16:48.137997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Jan 13 20:16:48.146192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:16:48.156323 disk-uuid[575]: Primary Header is updated. Jan 13 20:16:48.156323 disk-uuid[575]: Secondary Entries is updated. Jan 13 20:16:48.156323 disk-uuid[575]: Secondary Header is updated. Jan 13 20:16:48.178147 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:48.255939 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:16:48.498347 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:16:48.632374 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:16:48.632443 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:16:48.633728 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:16:48.688347 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:16:48.688640 kernel: usbcore: registered new interface driver usbhid Jan 13 20:16:48.688660 kernel: usbhid: USB HID core driver Jan 13 20:16:49.184971 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:16:49.185567 disk-uuid[576]: The operation has completed successfully. Jan 13 20:16:49.277890 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:16:49.278951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:16:49.285200 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:16:49.294427 sh[587]: Success Jan 13 20:16:49.310944 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:16:49.389752 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jan 13 20:16:49.412104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:16:49.413776 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:16:49.440458 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2 Jan 13 20:16:49.440547 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:49.440571 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:16:49.440607 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:16:49.441303 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:16:49.452975 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:16:49.457375 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:16:49.459118 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:16:49.467105 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:16:49.472237 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:16:49.492150 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:16:49.492262 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:49.492277 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:49.498168 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:49.498306 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:49.511419 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:16:49.513000 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:16:49.521033 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 20:16:49.530243 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:16:49.621655 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:49.639262 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:49.652546 ignition[689]: Ignition 2.20.0 Jan 13 20:16:49.653400 ignition[689]: Stage: fetch-offline Jan 13 20:16:49.653499 ignition[689]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:49.653512 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:49.653755 ignition[689]: parsed url from cmdline: "" Jan 13 20:16:49.653760 ignition[689]: no config URL provided Jan 13 20:16:49.653766 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:49.657475 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:49.653775 ignition[689]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:49.653782 ignition[689]: failed to fetch config: resource requires networking Jan 13 20:16:49.654030 ignition[689]: Ignition finished successfully Jan 13 20:16:49.668295 systemd-networkd[774]: lo: Link UP Jan 13 20:16:49.668309 systemd-networkd[774]: lo: Gained carrier Jan 13 20:16:49.671290 systemd-networkd[774]: Enumeration completed Jan 13 20:16:49.671880 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:49.672790 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:49.672794 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:49.673767 systemd[1]: Reached target network.target - Network. Jan 13 20:16:49.676832 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:16:49.676835 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:49.677625 systemd-networkd[774]: eth0: Link UP
Jan 13 20:16:49.677629 systemd-networkd[774]: eth0: Gained carrier
Jan 13 20:16:49.677639 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:49.685609 systemd-networkd[774]: eth1: Link UP
Jan 13 20:16:49.685633 systemd-networkd[774]: eth1: Gained carrier
Jan 13 20:16:49.685651 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:49.688708 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:16:49.707798 ignition[779]: Ignition 2.20.0
Jan 13 20:16:49.707813 ignition[779]: Stage: fetch
Jan 13 20:16:49.708117 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:49.708153 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:49.708277 ignition[779]: parsed url from cmdline: ""
Jan 13 20:16:49.708281 ignition[779]: no config URL provided
Jan 13 20:16:49.708286 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:49.708294 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:49.708382 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 13 20:16:49.709433 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 20:16:49.732085 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:49.740019 systemd-networkd[774]: eth0: DHCPv4 address 138.199.153.196/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:16:49.909726 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 13 20:16:49.916433 ignition[779]: GET result: OK
Jan 13 20:16:49.916529 ignition[779]: parsing config with SHA512: 0bd9ba652924b4f8fb0aa25d7799fcaadaf23bc8d3b4c332bec7a93dd4767e7d3e4077bc797bbaf7e5976969ff10d7b227298c8defdd4d1e8a48b12e759e9b23
Jan 13 20:16:49.922102 unknown[779]: fetched base config from "system"
Jan 13 20:16:49.922594 ignition[779]: fetch: fetch complete
Jan 13 20:16:49.922114 unknown[779]: fetched base config from "system"
Jan 13 20:16:49.922601 ignition[779]: fetch: fetch passed
Jan 13 20:16:49.922120 unknown[779]: fetched user config from "hetzner"
Jan 13 20:16:49.922662 ignition[779]: Ignition finished successfully
Jan 13 20:16:49.926140 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:49.932294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:16:49.947584 ignition[786]: Ignition 2.20.0
Jan 13 20:16:49.947595 ignition[786]: Stage: kargs
Jan 13 20:16:49.947826 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:49.947838 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:49.949020 ignition[786]: kargs: kargs passed
Jan 13 20:16:49.949097 ignition[786]: Ignition finished successfully
Jan 13 20:16:49.951436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:49.961186 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:16:49.973271 ignition[793]: Ignition 2.20.0
Jan 13 20:16:49.973283 ignition[793]: Stage: disks
Jan 13 20:16:49.973466 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:49.973476 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:49.977360 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:16:49.974525 ignition[793]: disks: disks passed
Jan 13 20:16:49.979065 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:49.974589 ignition[793]: Ignition finished successfully
Jan 13 20:16:49.979737 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:16:49.980810 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:49.982480 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:49.983580 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:49.996765 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:16:50.016780 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:16:50.020240 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:16:50.031787 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:16:50.098048 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:16:50.099347 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:16:50.101035 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:50.109403 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:50.113062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:16:50.121275 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:16:50.124321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:16:50.125961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:50.131296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:16:50.138142 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:16:50.142412 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810)
Jan 13 20:16:50.149361 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:50.149446 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:50.150874 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:50.155929 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:50.156012 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:50.162057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:50.204520 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:16:50.212177 coreos-metadata[812]: Jan 13 20:16:50.212 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 13 20:16:50.216491 coreos-metadata[812]: Jan 13 20:16:50.215 INFO Fetch successful
Jan 13 20:16:50.216491 coreos-metadata[812]: Jan 13 20:16:50.215 INFO wrote hostname ci-4186-1-0-0-2aa1049bb1 to /sysroot/etc/hostname
Jan 13 20:16:50.220647 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:16:50.219669 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:50.226161 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:16:50.233882 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:16:50.382692 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:50.390370 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:16:50.395294 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:50.405964 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:50.438323 ignition[927]: INFO : Ignition 2.20.0
Jan 13 20:16:50.439810 ignition[927]: INFO : Stage: mount
Jan 13 20:16:50.439810 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:50.439810 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:50.442361 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:16:50.448381 ignition[927]: INFO : mount: mount passed
Jan 13 20:16:50.448381 ignition[927]: INFO : Ignition finished successfully
Jan 13 20:16:50.450186 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:16:50.458090 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:16:50.464113 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:50.472145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:50.498982 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (938)
Jan 13 20:16:50.501366 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:16:50.501424 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:50.501964 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:50.510971 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:50.511052 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:50.514615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:50.537642 ignition[955]: INFO : Ignition 2.20.0
Jan 13 20:16:50.537642 ignition[955]: INFO : Stage: files
Jan 13 20:16:50.540030 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:50.540030 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:50.540030 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:16:50.543057 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:16:50.543057 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:16:50.547467 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:16:50.548344 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:16:50.550024 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:16:50.548574 unknown[955]: wrote ssh authorized keys file for user: core
Jan 13 20:16:50.551795 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:50.552954 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:50.857129 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:16:51.073618 systemd-networkd[774]: eth0: Gained IPv6LL
Jan 13 20:16:51.521828 systemd-networkd[774]: eth1: Gained IPv6LL
Jan 13 20:16:51.958783 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:51.958783 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:51.962859 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 20:16:52.527564 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:16:53.009656 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:53.009656 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:53.014936 ignition[955]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:53.026725 ignition[955]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:53.026725 ignition[955]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:53.026725 ignition[955]: INFO : files: files passed
Jan 13 20:16:53.026725 ignition[955]: INFO : Ignition finished successfully
Jan 13 20:16:53.018141 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:16:53.030326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:16:53.034134 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:16:53.037551 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:16:53.038363 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:16:53.052251 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:53.052251 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:53.055236 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:53.058616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:53.060512 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:16:53.074282 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:16:53.123837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:16:53.124473 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:16:53.126892 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:53.127592 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:16:53.128744 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:16:53.150321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:16:53.169129 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:53.176287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:16:53.199552 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:53.200547 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:53.201806 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:16:53.203049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:16:53.203184 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:53.205048 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:16:53.206603 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:16:53.207756 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:16:53.208774 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:53.210007 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:53.211608 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:16:53.212866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:53.214169 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:16:53.215553 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:16:53.216658 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:16:53.218282 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:16:53.218413 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:53.219934 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:53.220696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:53.221953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:16:53.222436 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:53.223484 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:16:53.223664 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:53.225458 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:16:53.225627 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:53.227284 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:16:53.227408 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:16:53.228737 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:16:53.228856 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:16:53.239241 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:16:53.245300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:53.246498 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:16:53.246850 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:53.250049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:16:53.250178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:53.263875 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:16:53.264994 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:16:53.266738 ignition[1008]: INFO : Ignition 2.20.0
Jan 13 20:16:53.266738 ignition[1008]: INFO : Stage: umount
Jan 13 20:16:53.268484 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:53.268484 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:53.270724 ignition[1008]: INFO : umount: umount passed
Jan 13 20:16:53.272310 ignition[1008]: INFO : Ignition finished successfully
Jan 13 20:16:53.273310 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:16:53.273446 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:16:53.278734 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:16:53.280009 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:16:53.280419 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:16:53.281336 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:16:53.281392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:53.282520 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:16:53.282571 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:16:53.283803 systemd[1]: Stopped target network.target - Network.
Jan 13 20:16:53.284601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:16:53.284680 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:53.285817 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:16:53.286764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:16:53.290390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:53.292797 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:16:53.293432 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:16:53.294892 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:16:53.294992 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:53.296281 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:16:53.296352 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:53.297609 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:16:53.297671 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:16:53.299037 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:16:53.299094 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:53.300412 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:16:53.301400 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:53.304616 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:16:53.304786 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:53.306893 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:16:53.307104 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:53.308695 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 13 20:16:53.310490 systemd-networkd[774]: eth1: DHCPv6 lease lost
Jan 13 20:16:53.312412 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:16:53.312511 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:53.313538 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:16:53.313601 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:53.315066 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:16:53.315219 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:16:53.316812 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:16:53.316884 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:53.323273 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:16:53.324108 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:16:53.324228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:53.324983 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:16:53.325033 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:53.325738 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:16:53.325786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:53.328578 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:53.350047 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:16:53.350244 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:53.351807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:16:53.351860 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:53.353061 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:16:53.353097 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:53.354143 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:16:53.354231 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:53.355792 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:16:53.355848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:53.357502 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:53.357560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:53.364164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:16:53.365791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:16:53.365876 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:53.367989 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:16:53.368086 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:53.369257 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:16:53.369315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:53.371115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:53.371185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:53.373123 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:16:53.373278 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:16:53.374445 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:16:53.374555 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:16:53.376187 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:16:53.382220 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:16:53.392154 systemd[1]: Switching root.
Jan 13 20:16:53.433460 systemd-journald[237]: Journal stopped
Jan 13 20:16:54.705436 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:16:54.705510 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:16:54.705524 kernel: SELinux: policy capability open_perms=1
Jan 13 20:16:54.705533 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:16:54.705542 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:16:54.705551 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:16:54.705570 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:16:54.705583 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:16:54.705592 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:16:54.705601 kernel: audit: type=1403 audit(1736799413.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:16:54.705611 systemd[1]: Successfully loaded SELinux policy in 41.155ms.
Jan 13 20:16:54.705633 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.155ms.
Jan 13 20:16:54.705644 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:54.705654 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:54.705664 systemd[1]: Detected architecture arm64.
Jan 13 20:16:54.705676 systemd[1]: Detected first boot.
Jan 13 20:16:54.705687 systemd[1]: Hostname set to .
Jan 13 20:16:54.705698 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:54.705712 zram_generator::config[1051]: No configuration found.
Jan 13 20:16:54.705728 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:16:54.705739 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:16:54.705749 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:16:54.705759 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:16:54.705770 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:16:54.705782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:16:54.705793 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:16:54.705803 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:16:54.705814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:16:54.705825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:16:54.705835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:16:54.705845 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:16:54.705855 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:54.705867 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:54.705878 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:16:54.705888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:16:54.705921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:16:54.705934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:54.705945 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:16:54.705955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:54.705965 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:16:54.705978 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:54.705991 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:54.706003 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:16:54.706013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:54.706025 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:54.706036 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:54.706048 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:54.706062 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:16:54.706073 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:16:54.706083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:54.706094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:54.706105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:54.706115 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:16:54.706130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:16:54.706140 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:16:54.706150 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:16:54.706161 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:16:54.706173 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:16:54.706194 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:16:54.706207 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:54.706217 systemd[1]: Reached target machines.target - Containers. Jan 13 20:16:54.706227 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:54.706238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:54.706249 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:54.706267 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:54.706278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:54.706288 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:54.706298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:54.706308 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:54.706319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:54.706331 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:54.706341 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:16:54.706352 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:16:54.706363 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:16:54.706373 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:16:54.706384 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:54.706394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 13 20:16:54.706405 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:54.706415 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:16:54.706426 kernel: loop: module loaded Jan 13 20:16:54.706436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:54.706446 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:16:54.706456 systemd[1]: Stopped verity-setup.service. Jan 13 20:16:54.706466 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:54.706477 kernel: fuse: init (API version 7.39) Jan 13 20:16:54.706487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:54.706498 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:54.706507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:54.706519 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:54.706529 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:54.706540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:54.706551 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:54.706561 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:54.706573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:54.706584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:54.706595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:54.706605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:54.706617 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 13 20:16:54.706629 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:54.706639 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:54.706649 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:54.706659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:54.706669 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:54.706679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:54.706689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:54.706699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:16:54.706819 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:54.706837 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:54.706876 systemd-journald[1111]: Collecting audit messages is disabled. Jan 13 20:16:54.710988 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:54.711036 systemd-journald[1111]: Journal started Jan 13 20:16:54.711069 systemd-journald[1111]: Runtime Journal (/run/log/journal/f155feca86a94053b04152cf469384a7) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:16:54.411354 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:54.436309 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:16:54.436836 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:16:54.722030 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:16:54.722106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:54.722129 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 13 20:16:54.728435 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:54.735921 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:54.743425 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:16:54.743718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:54.753465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:16:54.757942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:54.762342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:54.763949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:54.771016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:54.784946 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:54.827941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:16:54.828028 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:54.819807 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:54.838815 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:54.860873 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:54.881887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 13 20:16:54.912967 kernel: loop0: detected capacity change from 0 to 194096 Jan 13 20:16:54.913716 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:54.926666 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:54.935939 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:54.949937 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:54.954242 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:54.963584 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:54.976556 kernel: loop1: detected capacity change from 0 to 113552 Jan 13 20:16:54.977295 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 13 20:16:54.977310 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Jan 13 20:16:54.984204 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:54.988012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:54.997403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:55.004677 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:55.019792 systemd-journald[1111]: Time spent on flushing to /var/log/journal/f155feca86a94053b04152cf469384a7 is 82.547ms for 1138 entries. Jan 13 20:16:55.019792 systemd-journald[1111]: System Journal (/var/log/journal/f155feca86a94053b04152cf469384a7) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:16:55.118101 systemd-journald[1111]: Received client request to flush runtime journal. 
Jan 13 20:16:55.118156 kernel: loop2: detected capacity change from 0 to 8 Jan 13 20:16:55.118170 kernel: loop3: detected capacity change from 0 to 116784 Jan 13 20:16:55.051079 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:16:55.076172 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:16:55.081989 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:55.096347 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:55.111248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:55.125411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:55.146929 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 13 20:16:55.146945 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Jan 13 20:16:55.153730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:55.162848 kernel: loop4: detected capacity change from 0 to 194096 Jan 13 20:16:55.202935 kernel: loop5: detected capacity change from 0 to 113552 Jan 13 20:16:55.228555 kernel: loop6: detected capacity change from 0 to 8 Jan 13 20:16:55.231975 kernel: loop7: detected capacity change from 0 to 116784 Jan 13 20:16:55.257263 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:16:55.258821 (sd-merge)[1194]: Merged extensions into '/usr'. Jan 13 20:16:55.270436 systemd[1]: Reloading requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:55.270626 systemd[1]: Reloading... Jan 13 20:16:55.438992 zram_generator::config[1226]: No configuration found. 
Jan 13 20:16:55.587989 ldconfig[1136]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:55.610122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:55.675057 systemd[1]: Reloading finished in 403 ms. Jan 13 20:16:55.700974 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:55.702450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:55.720477 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:55.725076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:55.727014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:16:55.738159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:55.744173 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:55.744239 systemd[1]: Reloading... Jan 13 20:16:55.761727 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:16:55.762601 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:55.763877 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:55.764325 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 13 20:16:55.764465 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 13 20:16:55.771878 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 20:16:55.772110 systemd-tmpfiles[1258]: Skipping /boot Jan 13 20:16:55.791295 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Jan 13 20:16:55.797215 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:55.797262 systemd-tmpfiles[1258]: Skipping /boot Jan 13 20:16:55.846930 zram_generator::config[1287]: No configuration found. Jan 13 20:16:56.081477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:56.149048 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:16:56.149245 systemd[1]: Reloading finished in 404 ms. Jan 13 20:16:56.157941 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:16:56.175674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:56.177867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:56.209428 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:56.215214 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:56.224147 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:16:56.236368 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:56.247113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:56.252243 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:56.259834 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
Jan 13 20:16:56.260025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:56.263294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:56.268424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:56.275781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:56.278013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1301) Jan 13 20:16:56.278153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:56.282229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:56.282436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:56.285875 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:56.290972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:56.291142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:56.295922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:56.310540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:56.311424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:56.318876 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:56.376294 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 13 20:16:56.377448 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:16:56.405475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:56.406584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:56.408007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:56.409120 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:56.409993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:56.436672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:56.437403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:56.439301 augenrules[1402]: No rules Jan 13 20:16:56.439621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:56.441011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:56.452074 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:56.454042 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:56.455720 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:16:56.462009 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:16:56.463928 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:16:56.464004 kernel: [drm] features: -context_init Jan 13 20:16:56.467964 kernel: [drm] number of scanouts: 1 Jan 13 20:16:56.468045 kernel: [drm] number of cap sets: 0 Jan 13 20:16:56.469781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:56.469951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 13 20:16:56.476877 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:56.480627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:56.481353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:56.485947 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:16:56.515505 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:16:56.520922 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:16:56.525468 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:56.530927 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:16:56.528761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:56.547548 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:56.570357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:56.570617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:56.578871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:56.594438 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:56.607272 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:16:56.608333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 13 20:16:56.629557 systemd-networkd[1367]: lo: Link UP Jan 13 20:16:56.629947 systemd-networkd[1367]: lo: Gained carrier Jan 13 20:16:56.633017 systemd-networkd[1367]: Enumeration completed Jan 13 20:16:56.633389 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:56.634147 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:56.634286 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:56.635336 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:56.635485 systemd-networkd[1367]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:56.636296 systemd-networkd[1367]: eth0: Link UP Jan 13 20:16:56.636304 systemd-networkd[1367]: eth0: Gained carrier Jan 13 20:16:56.636326 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:56.644220 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:16:56.646166 systemd-networkd[1367]: eth1: Link UP Jan 13 20:16:56.646290 systemd-networkd[1367]: eth1: Gained carrier Jan 13 20:16:56.646317 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:56.647053 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:56.685166 systemd-networkd[1367]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:56.690320 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 13 20:16:56.691678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:56.699162 systemd-networkd[1367]: eth0: DHCPv4 address 138.199.153.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:56.701442 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:56.711723 systemd-resolved[1370]: Positive Trust Anchors: Jan 13 20:16:56.711738 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:56.711771 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:56.719794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:56.723325 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:16:56.725245 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:56.724385 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:16:56.727613 systemd-resolved[1370]: Using system hostname 'ci-4186-1-0-0-2aa1049bb1'. Jan 13 20:16:56.730192 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:56.731268 systemd[1]: Reached target network.target - Network. Jan 13 20:16:56.731871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:56.732733 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 13 20:16:56.733551 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:16:56.734414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:16:56.735560 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:16:56.736509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:16:56.737353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:16:56.738140 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:16:56.738215 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:56.738858 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:56.742372 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:16:56.745469 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:16:56.750329 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:16:56.751847 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:16:56.752678 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:56.753287 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:56.753823 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:56.753861 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:56.761078 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:16:56.764841 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:16:56.769477 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 13 20:16:56.774185 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:16:56.779394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:16:56.781147 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:16:56.784512 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:16:56.787865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:16:56.792391 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:16:56.794070 systemd-timesyncd[1381]: Contacted time server 78.47.56.71:123 (0.flatcar.pool.ntp.org). Jan 13 20:16:56.794136 systemd-timesyncd[1381]: Initial clock synchronization to Mon 2025-01-13 20:16:56.484776 UTC. Jan 13 20:16:56.797816 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:16:56.802151 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:16:56.811124 jq[1448]: false Jan 13 20:16:56.809429 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:16:56.814233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:16:56.814891 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:16:56.819287 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:16:56.832068 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 20:16:56.843999 coreos-metadata[1444]: Jan 13 20:16:56.834 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:16:56.843999 coreos-metadata[1444]: Jan 13 20:16:56.841 INFO Fetch successful Jan 13 20:16:56.843999 coreos-metadata[1444]: Jan 13 20:16:56.841 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:16:56.843999 coreos-metadata[1444]: Jan 13 20:16:56.843 INFO Fetch successful Jan 13 20:16:56.840282 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:56.849381 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:16:56.850820 jq[1457]: true Jan 13 20:16:56.849610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:16:56.907470 update_engine[1456]: I20250113 20:16:56.904608 1456 main.cc:92] Flatcar Update Engine starting Jan 13 20:16:56.927069 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:16:56.927375 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:16:56.930281 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:16:56.933041 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:16:56.957977 systemd-logind[1455]: New seat seat0. 
Jan 13 20:16:56.975153 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:16:56.986056 jq[1465]: true Jan 13 20:16:56.975196 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:16:56.990266 dbus-daemon[1445]: [system] SELinux support is enabled Jan 13 20:16:57.001890 update_engine[1456]: I20250113 20:16:56.996126 1456 update_check_scheduler.cc:74] Next update check in 3m3s Jan 13 20:16:57.001933 extend-filesystems[1449]: Found loop4 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found loop5 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found loop6 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found loop7 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda1 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda2 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda3 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found usr Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda4 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda6 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda7 Jan 13 20:16:57.001933 extend-filesystems[1449]: Found sda9 Jan 13 20:16:57.001933 extend-filesystems[1449]: Checking size of /dev/sda9 Jan 13 20:16:56.998826 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:16:57.031341 tar[1464]: linux-arm64/helm Jan 13 20:16:57.018621 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:16:57.000565 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 13 20:16:57.002831 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:16:57.009624 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:16:57.009665 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:16:57.011771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:16:57.011797 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:16:57.030463 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:16:57.052339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:16:57.063272 extend-filesystems[1449]: Resized partition /dev/sda9 Jan 13 20:16:57.077733 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:16:57.082490 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:16:57.084346 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:16:57.089412 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:16:57.161950 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1301) Jan 13 20:16:57.222266 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:57.221466 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:16:57.242084 systemd[1]: Starting sshkeys.service... 
Jan 13 20:16:57.286534 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:16:57.304446 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:16:57.320254 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:16:57.335410 extend-filesystems[1497]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:16:57.335410 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:16:57.335410 extend-filesystems[1497]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:16:57.339943 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Jan 13 20:16:57.339943 extend-filesystems[1449]: Found sr0 Jan 13 20:16:57.338497 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:16:57.338787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:16:57.407735 coreos-metadata[1522]: Jan 13 20:16:57.407 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:16:57.413949 coreos-metadata[1522]: Jan 13 20:16:57.410 INFO Fetch successful Jan 13 20:16:57.416804 unknown[1522]: wrote ssh authorized keys file for user: core Jan 13 20:16:57.460562 containerd[1481]: time="2025-01-13T20:16:57.460378315Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:16:57.474274 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:57.477671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:16:57.484938 systemd[1]: Finished sshkeys.service. Jan 13 20:16:57.499764 containerd[1481]: time="2025-01-13T20:16:57.499708224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:16:57.502561 containerd[1481]: time="2025-01-13T20:16:57.502453862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:57.502719 containerd[1481]: time="2025-01-13T20:16:57.502701955Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:16:57.502906 containerd[1481]: time="2025-01-13T20:16:57.502875812Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:16:57.503353 containerd[1481]: time="2025-01-13T20:16:57.503315456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:16:57.503515 containerd[1481]: time="2025-01-13T20:16:57.503499391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.503753 containerd[1481]: time="2025-01-13T20:16:57.503661709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:57.503753 containerd[1481]: time="2025-01-13T20:16:57.503680633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504093 containerd[1481]: time="2025-01-13T20:16:57.504055003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504226 containerd[1481]: time="2025-01-13T20:16:57.504137009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504226 containerd[1481]: time="2025-01-13T20:16:57.504156356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504226 containerd[1481]: time="2025-01-13T20:16:57.504165972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504742 containerd[1481]: time="2025-01-13T20:16:57.504548920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.504968 containerd[1481]: time="2025-01-13T20:16:57.504940906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:57.505296 containerd[1481]: time="2025-01-13T20:16:57.505276043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:57.505432 containerd[1481]: time="2025-01-13T20:16:57.505345817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:16:57.505575 containerd[1481]: time="2025-01-13T20:16:57.505542176Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 13 20:16:57.505826 containerd[1481]: time="2025-01-13T20:16:57.505680723Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:16:57.511664 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:16:57.523126 containerd[1481]: time="2025-01-13T20:16:57.522980641Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:16:57.523921 containerd[1481]: time="2025-01-13T20:16:57.523373705Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:16:57.523921 containerd[1481]: time="2025-01-13T20:16:57.523404361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:16:57.523921 containerd[1481]: time="2025-01-13T20:16:57.523421939Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:16:57.523921 containerd[1481]: time="2025-01-13T20:16:57.523436594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:16:57.523921 containerd[1481]: time="2025-01-13T20:16:57.523647684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525163320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525422952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525442761Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525461031Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525475263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525488071Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525500457Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525515419Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525530497Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525547152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525560692Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525572846Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525596655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:57.526298 containerd[1481]: time="2025-01-13T20:16:57.525611541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525624849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525644197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525657082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525671506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525683699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525696239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525708239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525722548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525734780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525746588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525762012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525778552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525801438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525814208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.526611 containerd[1481]: time="2025-01-13T20:16:57.525825170Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.527940383Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528067083Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528082661Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528095354Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528104009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528171359Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528186168Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:16:57.529269 containerd[1481]: time="2025-01-13T20:16:57.528198669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:16:57.529508 containerd[1481]: time="2025-01-13T20:16:57.528555691Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:16:57.529508 containerd[1481]: time="2025-01-13T20:16:57.528604387Z" level=info msg="Connect containerd service" Jan 13 20:16:57.529508 containerd[1481]: time="2025-01-13T20:16:57.528640312Z" level=info msg="using legacy CRI server" Jan 13 20:16:57.529508 containerd[1481]: time="2025-01-13T20:16:57.528647236Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:16:57.529508 containerd[1481]: time="2025-01-13T20:16:57.528923407Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:16:57.532181 containerd[1481]: time="2025-01-13T20:16:57.532128921Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.532830697Z" level=info msg="Start subscribing containerd event" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.532971206Z" level=info msg="Start recovering state" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.533065173Z" level=info msg="Start event monitor" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.533077905Z" level=info msg="Start snapshots syncer" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.533087482Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:16:57.533796 containerd[1481]: time="2025-01-13T20:16:57.533096060Z" level=info msg="Start streaming server" Jan 13 20:16:57.535601 containerd[1481]: time="2025-01-13T20:16:57.535561295Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:16:57.535936 containerd[1481]: time="2025-01-13T20:16:57.535746884Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:16:57.540558 containerd[1481]: time="2025-01-13T20:16:57.540284675Z" level=info msg="containerd successfully booted in 0.082153s" Jan 13 20:16:57.540410 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:16:57.749071 tar[1464]: linux-arm64/LICENSE Jan 13 20:16:57.749071 tar[1464]: linux-arm64/README.md Jan 13 20:16:57.770207 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:16:58.305520 systemd-networkd[1367]: eth1: Gained IPv6LL Jan 13 20:16:58.312098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:58.315072 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:16:58.327107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:58.331248 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
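The CRI configuration dumped above shows the runc runtime with `SystemdCgroup:true`, meaning containerd delegates cgroup management to systemd; a kubelet on such a node must use the matching `systemd` cgroup driver or pods will fail to start. A hand-written sketch of the corresponding `config.toml` stanza (illustrative only, not the actual file from this host):

```shell
# containerd 1.7 expresses the runc option seen in the log dump as this TOML
# stanza; printed here only to show its shape, not to install it anywhere.
stanza='[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true'
printf '%s\n' "$stanza"
```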
Jan 13 20:16:58.381543 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:16:58.482734 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:16:58.507435 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:16:58.520694 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:16:58.531204 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:16:58.531407 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:16:58.542982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:16:58.554508 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:16:58.564059 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:16:58.573580 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:16:58.576878 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:16:58.625069 systemd-networkd[1367]: eth0: Gained IPv6LL Jan 13 20:16:59.241003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:16:59.242382 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:16:59.248198 systemd[1]: Startup finished in 862ms (kernel) + 6.956s (initrd) + 5.635s (userspace) = 13.454s. 
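A small detail in the startup summary above: the displayed phase times are rounded to the millisecond, so they add up to 13.453 s while the reported total is 13.454 s, presumably because systemd sums the unrounded values:

```shell
# Sum of the displayed phase durations, in milliseconds:
# kernel (862 ms) + initrd (6956 ms) + userspace (5635 ms).
echo "$((862 + 6956 + 5635)) ms"   # one millisecond short of the reported total
```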
Jan 13 20:16:59.253704 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:59.273109 agetty[1567]: failed to open credentials directory Jan 13 20:16:59.274281 agetty[1568]: failed to open credentials directory Jan 13 20:16:59.959384 kubelet[1574]: E0113 20:16:59.957814 1574 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:59.964960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:59.965113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:10.104946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:17:10.118705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:10.291285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:10.292408 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:10.373918 kubelet[1594]: E0113 20:17:10.371824 1594 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:10.375098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:10.375558 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
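The kubelet failures above, and the restarts that follow, all have one cause: /var/lib/kubelet/config.yaml is written by `kubeadm init` or `kubeadm join`, neither of which has run on this node yet, so kubelet exits immediately and systemd's `Restart=` policy relaunches it roughly every ten seconds (hence the climbing restart counter). A minimal pre-flight check along these lines distinguishes "not yet joined" from a real kubelet fault (hypothetical helper, not part of Flatcar):

```shell
# kubelet refuses to start without the kubeadm-generated config file;
# checking for it first avoids misreading the crash loop as a kubelet bug.
check_kubelet_cfg() {
    # succeeds (exit 0) only when the config file exists and is non-empty
    [ -s "$1" ]
}

if check_kubelet_cfg /var/lib/kubelet/config.yaml; then
    echo "node already initialized"
else
    echo "config missing: run kubeadm init or kubeadm join first"
fi
```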
Jan 13 20:17:20.604948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:17:20.616263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:20.771622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:20.784528 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:20.841398 kubelet[1610]: E0113 20:17:20.841334 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:20.844141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:20.844309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:30.855344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:17:30.864351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:31.000610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:17:31.006790 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:31.082310 kubelet[1627]: E0113 20:17:31.082211 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:31.085302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:31.085508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:41.104776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:17:41.112334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:41.283078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:41.295528 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:41.349421 kubelet[1643]: E0113 20:17:41.349281 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:41.352844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:41.353068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:41.796853 update_engine[1456]: I20250113 20:17:41.795003 1456 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:17:41.847932 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1660) Jan 13 20:17:51.354962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:17:51.362230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:51.489415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:51.504614 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:51.557808 kubelet[1674]: E0113 20:17:51.557659 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:51.561124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:51.561267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:01.604761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:18:01.611247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:01.761196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:01.761372 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:01.813274 kubelet[1691]: E0113 20:18:01.813211 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:01.815801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:01.816178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:11.855292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:18:11.864216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:12.011280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:12.014873 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:12.067749 kubelet[1707]: E0113 20:18:12.067684 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:12.072487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:12.072665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:22.105067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:18:22.115610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:18:22.236282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:22.242205 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:22.293258 kubelet[1723]: E0113 20:18:22.293204 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:22.297872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:22.298108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:32.354891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:18:32.363615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:32.511182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:32.517251 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:32.565259 kubelet[1739]: E0113 20:18:32.565192 1739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:32.568572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:32.568836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:42.605880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 13 20:18:42.615278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:42.747084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:42.753154 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:42.807616 kubelet[1755]: E0113 20:18:42.807548 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:42.810756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:42.810927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:43.714430 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:18:43.721414 systemd[1]: Started sshd@0-138.199.153.196:22-139.178.89.65:60268.service - OpenSSH per-connection server daemon (139.178.89.65:60268). Jan 13 20:18:44.716343 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 60268 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:44.721555 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:44.731316 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:18:44.739365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:18:44.743026 systemd-logind[1455]: New session 1 of user core. Jan 13 20:18:44.757559 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:18:44.770343 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 20:18:44.775854 (systemd)[1768]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:18:44.893594 systemd[1768]: Queued start job for default target default.target. Jan 13 20:18:44.902656 systemd[1768]: Created slice app.slice - User Application Slice. Jan 13 20:18:44.903038 systemd[1768]: Reached target paths.target - Paths. Jan 13 20:18:44.903063 systemd[1768]: Reached target timers.target - Timers. Jan 13 20:18:44.906174 systemd[1768]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:18:44.933620 systemd[1768]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:18:44.934670 systemd[1768]: Reached target sockets.target - Sockets. Jan 13 20:18:44.934699 systemd[1768]: Reached target basic.target - Basic System. Jan 13 20:18:44.934758 systemd[1768]: Reached target default.target - Main User Target. Jan 13 20:18:44.934788 systemd[1768]: Startup finished in 150ms. Jan 13 20:18:44.934962 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:18:44.946141 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:18:45.647877 systemd[1]: Started sshd@1-138.199.153.196:22-139.178.89.65:60284.service - OpenSSH per-connection server daemon (139.178.89.65:60284). Jan 13 20:18:46.635935 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 60284 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:46.638021 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:46.647813 systemd-logind[1455]: New session 2 of user core. Jan 13 20:18:46.655940 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 20:18:47.319439 sshd[1781]: Connection closed by 139.178.89.65 port 60284 Jan 13 20:18:47.320344 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:47.325224 systemd[1]: sshd@1-138.199.153.196:22-139.178.89.65:60284.service: Deactivated successfully. Jan 13 20:18:47.328861 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:18:47.340221 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:18:47.342097 systemd-logind[1455]: Removed session 2. Jan 13 20:18:47.496379 systemd[1]: Started sshd@2-138.199.153.196:22-139.178.89.65:60300.service - OpenSSH per-connection server daemon (139.178.89.65:60300). Jan 13 20:18:48.491539 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 60300 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:48.495027 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:48.503722 systemd-logind[1455]: New session 3 of user core. Jan 13 20:18:48.510630 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:18:49.176032 sshd[1788]: Connection closed by 139.178.89.65 port 60300 Jan 13 20:18:49.177375 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:49.183592 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:18:49.184439 systemd[1]: sshd@2-138.199.153.196:22-139.178.89.65:60300.service: Deactivated successfully. Jan 13 20:18:49.186930 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:18:49.188595 systemd-logind[1455]: Removed session 3. Jan 13 20:18:49.362514 systemd[1]: Started sshd@3-138.199.153.196:22-139.178.89.65:60308.service - OpenSSH per-connection server daemon (139.178.89.65:60308). 
Jan 13 20:18:50.359535 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 60308 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:50.362336 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:50.368677 systemd-logind[1455]: New session 4 of user core. Jan 13 20:18:50.378503 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:18:51.044961 sshd[1795]: Connection closed by 139.178.89.65 port 60308 Jan 13 20:18:51.045604 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:51.052048 systemd[1]: sshd@3-138.199.153.196:22-139.178.89.65:60308.service: Deactivated successfully. Jan 13 20:18:51.052231 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:18:51.058022 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:18:51.062038 systemd-logind[1455]: Removed session 4. Jan 13 20:18:51.227863 systemd[1]: Started sshd@4-138.199.153.196:22-139.178.89.65:55038.service - OpenSSH per-connection server daemon (139.178.89.65:55038). Jan 13 20:18:52.232560 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 55038 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:18:52.236044 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:52.243725 systemd-logind[1455]: New session 5 of user core. Jan 13 20:18:52.253381 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:18:52.772283 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:18:52.772720 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:52.855025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:18:52.866069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:18:53.049207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:53.053409 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:53.114017 kubelet[1827]: E0113 20:18:53.113479 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:53.118842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:53.119055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:53.170434 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:18:53.179257 (dockerd)[1837]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:18:53.476249 dockerd[1837]: time="2025-01-13T20:18:53.476124546Z" level=info msg="Starting up" Jan 13 20:18:53.613408 dockerd[1837]: time="2025-01-13T20:18:53.613328433Z" level=info msg="Loading containers: start." Jan 13 20:18:53.929924 kernel: Initializing XFRM netlink socket Jan 13 20:18:54.051694 systemd-networkd[1367]: docker0: Link UP Jan 13 20:18:54.092830 dockerd[1837]: time="2025-01-13T20:18:54.092780918Z" level=info msg="Loading containers: done." 
Jan 13 20:18:54.114790 dockerd[1837]: time="2025-01-13T20:18:54.114061031Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:18:54.114790 dockerd[1837]: time="2025-01-13T20:18:54.114185630Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:18:54.114790 dockerd[1837]: time="2025-01-13T20:18:54.114478990Z" level=info msg="Daemon has completed initialization" Jan 13 20:18:54.164039 dockerd[1837]: time="2025-01-13T20:18:54.163927238Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:18:54.164756 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:18:55.569529 containerd[1481]: time="2025-01-13T20:18:55.569478240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:18:56.271951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834009192.mount: Deactivated successfully. 
Jan 13 20:18:58.341640 containerd[1481]: time="2025-01-13T20:18:58.340232118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.343748 containerd[1481]: time="2025-01-13T20:18:58.343688358Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102" Jan 13 20:18:58.345765 containerd[1481]: time="2025-01-13T20:18:58.345712198Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.351168 containerd[1481]: time="2025-01-13T20:18:58.351083118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:58.352026 containerd[1481]: time="2025-01-13T20:18:58.351979038Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.782449798s" Jan 13 20:18:58.352026 containerd[1481]: time="2025-01-13T20:18:58.352022078Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 13 20:18:58.379041 containerd[1481]: time="2025-01-13T20:18:58.378643078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:19:01.033840 containerd[1481]: time="2025-01-13T20:19:01.033630173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.037553 containerd[1481]: time="2025-01-13T20:19:01.037350579Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714" Jan 13 20:19:01.041783 containerd[1481]: time="2025-01-13T20:19:01.041301625Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.045677 containerd[1481]: time="2025-01-13T20:19:01.045616311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:01.047306 containerd[1481]: time="2025-01-13T20:19:01.047189714Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.668502756s" Jan 13 20:19:01.047306 containerd[1481]: time="2025-01-13T20:19:01.047299634Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 13 20:19:01.075414 containerd[1481]: time="2025-01-13T20:19:01.075199276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:19:02.387858 containerd[1481]: time="2025-01-13T20:19:02.386647780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.390023 containerd[1481]: time="2025-01-13T20:19:02.389925147Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352" Jan 13 20:19:02.392113 containerd[1481]: time="2025-01-13T20:19:02.392018831Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.396182 containerd[1481]: time="2025-01-13T20:19:02.396100279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.397709 containerd[1481]: time="2025-01-13T20:19:02.397487482Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.322215006s" Jan 13 20:19:02.397709 containerd[1481]: time="2025-01-13T20:19:02.397533882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 13 20:19:02.426715 containerd[1481]: time="2025-01-13T20:19:02.426580380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:19:03.156443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:19:03.167661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:03.349211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:19:03.362776 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:03.422804 kubelet[2115]: E0113 20:19:03.422264 2115 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:03.424925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:03.425131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:03.745869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133688661.mount: Deactivated successfully. Jan 13 20:19:04.146555 containerd[1481]: time="2025-01-13T20:19:04.145494007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.147695 containerd[1481]: time="2025-01-13T20:19:04.147611093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037" Jan 13 20:19:04.149726 containerd[1481]: time="2025-01-13T20:19:04.149634459Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.155346 containerd[1481]: time="2025-01-13T20:19:04.155249516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.156460 containerd[1481]: time="2025-01-13T20:19:04.156285239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id 
\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.729656498s" Jan 13 20:19:04.156460 containerd[1481]: time="2025-01-13T20:19:04.156334679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:19:04.190575 containerd[1481]: time="2025-01-13T20:19:04.190525738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:19:04.820542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274218366.mount: Deactivated successfully. Jan 13 20:19:05.626960 containerd[1481]: time="2025-01-13T20:19:05.626859658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:05.629710 containerd[1481]: time="2025-01-13T20:19:05.629613108Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 13 20:19:05.631132 containerd[1481]: time="2025-01-13T20:19:05.631073513Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:05.635438 containerd[1481]: time="2025-01-13T20:19:05.635363047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:05.636813 containerd[1481]: time="2025-01-13T20:19:05.636607411Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.446025433s" Jan 13 20:19:05.636813 containerd[1481]: time="2025-01-13T20:19:05.636654051Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:19:05.662347 containerd[1481]: time="2025-01-13T20:19:05.662296777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:19:06.217857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295094587.mount: Deactivated successfully. Jan 13 20:19:06.232128 containerd[1481]: time="2025-01-13T20:19:06.230755928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.233012 containerd[1481]: time="2025-01-13T20:19:06.232876376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 13 20:19:06.235882 containerd[1481]: time="2025-01-13T20:19:06.235831547Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.245788 containerd[1481]: time="2025-01-13T20:19:06.245733024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.247312 containerd[1481]: time="2025-01-13T20:19:06.247258750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 584.913773ms" Jan 13 20:19:06.247500 containerd[1481]: time="2025-01-13T20:19:06.247480590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:19:06.271686 containerd[1481]: time="2025-01-13T20:19:06.271634921Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:19:06.932691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468718756.mount: Deactivated successfully. Jan 13 20:19:10.157234 containerd[1481]: time="2025-01-13T20:19:10.157098316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.159715 containerd[1481]: time="2025-01-13T20:19:10.159376568Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 13 20:19:10.161545 containerd[1481]: time="2025-01-13T20:19:10.161492579Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.167233 containerd[1481]: time="2025-01-13T20:19:10.167147169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.169234 containerd[1481]: time="2025-01-13T20:19:10.169144460Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.897451659s" Jan 13 
20:19:10.169234 containerd[1481]: time="2025-01-13T20:19:10.169221060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 13 20:19:13.604673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 13 20:19:13.613373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:13.782348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:13.786238 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:13.845415 kubelet[2300]: E0113 20:19:13.845360 2300 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:13.849702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:13.850743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:17.037663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:17.049790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:17.076135 systemd[1]: Reloading requested from client PID 2316 ('systemctl') (unit session-5.scope)... Jan 13 20:19:17.076386 systemd[1]: Reloading... Jan 13 20:19:17.217931 zram_generator::config[2357]: No configuration found. Jan 13 20:19:17.328451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 20:19:17.404225 systemd[1]: Reloading finished in 327 ms. Jan 13 20:19:17.462503 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:19:17.463032 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:19:17.465034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:17.475580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:17.631302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:17.631379 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:17.709465 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:17.709465 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:17.709465 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:19:17.709465 kubelet[2404]: I0113 20:19:17.709419 2404 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:18.215026 kubelet[2404]: I0113 20:19:18.214847 2404 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:19:18.215298 kubelet[2404]: I0113 20:19:18.215273 2404 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:18.215855 kubelet[2404]: I0113 20:19:18.215827 2404 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:19:18.239453 kubelet[2404]: E0113 20:19:18.239118 2404 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.153.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.239453 kubelet[2404]: I0113 20:19:18.239276 2404 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:18.256327 kubelet[2404]: I0113 20:19:18.256258 2404 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:19:18.256926 kubelet[2404]: I0113 20:19:18.256877 2404 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:18.257388 kubelet[2404]: I0113 20:19:18.257017 2404 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-0-2aa1049bb1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:19:18.257637 kubelet[2404]: I0113 20:19:18.257624 2404 topology_manager.go:138] "Creating topology manager with none policy" Jan 
13 20:19:18.257693 kubelet[2404]: I0113 20:19:18.257686 2404 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:19:18.258055 kubelet[2404]: I0113 20:19:18.258042 2404 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:18.260471 kubelet[2404]: W0113 20:19:18.260290 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-2aa1049bb1&limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.260471 kubelet[2404]: E0113 20:19:18.260359 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-2aa1049bb1&limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.260671 kubelet[2404]: I0113 20:19:18.260655 2404 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:19:18.260730 kubelet[2404]: I0113 20:19:18.260721 2404 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:18.261066 kubelet[2404]: I0113 20:19:18.261053 2404 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:19:18.262090 kubelet[2404]: I0113 20:19:18.262060 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:18.269460 kubelet[2404]: W0113 20:19:18.269371 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.269460 kubelet[2404]: E0113 20:19:18.269449 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://138.199.153.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.270138 kubelet[2404]: I0113 20:19:18.270000 2404 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:18.271951 kubelet[2404]: I0113 20:19:18.270532 2404 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:18.271951 kubelet[2404]: W0113 20:19:18.270671 2404 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:19:18.273389 kubelet[2404]: I0113 20:19:18.273320 2404 server.go:1264] "Started kubelet" Jan 13 20:19:18.276277 kubelet[2404]: I0113 20:19:18.276100 2404 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:18.277873 kubelet[2404]: I0113 20:19:18.277653 2404 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:19:18.279271 kubelet[2404]: I0113 20:19:18.279189 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:18.280756 kubelet[2404]: I0113 20:19:18.279508 2404 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:18.280756 kubelet[2404]: E0113 20:19:18.279806 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.196:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-0-2aa1049bb1.181a59ff0cef18ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-0-2aa1049bb1,UID:ci-4186-1-0-0-2aa1049bb1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-0-2aa1049bb1,},FirstTimestamp:2025-01-13 20:19:18.273276076 +0000 UTC m=+0.634261237,LastTimestamp:2025-01-13 20:19:18.273276076 +0000 UTC m=+0.634261237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-0-2aa1049bb1,}" Jan 13 20:19:18.282154 kubelet[2404]: I0113 20:19:18.281508 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:18.289670 kubelet[2404]: E0113 20:19:18.289624 2404 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-0-2aa1049bb1\" not found" Jan 13 20:19:18.290248 kubelet[2404]: I0113 20:19:18.290225 2404 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:19:18.290555 kubelet[2404]: I0113 20:19:18.290535 2404 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:19:18.292270 kubelet[2404]: I0113 20:19:18.292235 2404 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:18.293920 kubelet[2404]: W0113 20:19:18.293836 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.294249 kubelet[2404]: E0113 20:19:18.294226 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.294433 kubelet[2404]: E0113 20:19:18.294404 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://138.199.153.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-2aa1049bb1?timeout=10s\": dial tcp 138.199.153.196:6443: connect: connection refused" interval="200ms" Jan 13 20:19:18.295874 kubelet[2404]: I0113 20:19:18.294740 2404 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:18.295874 kubelet[2404]: I0113 20:19:18.294860 2404 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:18.295874 kubelet[2404]: E0113 20:19:18.295454 2404 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:18.299925 kubelet[2404]: I0113 20:19:18.298337 2404 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:18.306626 kubelet[2404]: I0113 20:19:18.306562 2404 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:18.308013 kubelet[2404]: I0113 20:19:18.307950 2404 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:19:18.308013 kubelet[2404]: I0113 20:19:18.308016 2404 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:18.308013 kubelet[2404]: I0113 20:19:18.308038 2404 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:19:18.308302 kubelet[2404]: E0113 20:19:18.308091 2404 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:18.324448 kubelet[2404]: W0113 20:19:18.324373 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.324664 kubelet[2404]: E0113 20:19:18.324648 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:18.340744 kubelet[2404]: I0113 20:19:18.340711 2404 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:18.340956 kubelet[2404]: I0113 20:19:18.340940 2404 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:18.341041 kubelet[2404]: I0113 20:19:18.341032 2404 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:18.344676 kubelet[2404]: I0113 20:19:18.344637 2404 policy_none.go:49] "None policy: Start" Jan 13 20:19:18.345805 kubelet[2404]: I0113 20:19:18.345778 2404 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:18.346060 kubelet[2404]: I0113 20:19:18.346049 2404 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:18.357691 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 13 20:19:18.374775 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:19:18.380588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:19:18.394479 kubelet[2404]: I0113 20:19:18.394435 2404 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:18.394886 kubelet[2404]: I0113 20:19:18.394746 2404 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:18.395057 kubelet[2404]: I0113 20:19:18.395038 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:18.395778 kubelet[2404]: I0113 20:19:18.395731 2404 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.396956 kubelet[2404]: E0113 20:19:18.396583 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.196:6443/api/v1/nodes\": dial tcp 138.199.153.196:6443: connect: connection refused" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.400698 kubelet[2404]: E0113 20:19:18.400666 2404 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-0-2aa1049bb1\" not found" Jan 13 20:19:18.409278 kubelet[2404]: I0113 20:19:18.408698 2404 topology_manager.go:215] "Topology Admit Handler" podUID="f665d2c69b61d5b75b6b5188f2e129ed" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.411840 kubelet[2404]: I0113 20:19:18.411791 2404 topology_manager.go:215] "Topology Admit Handler" podUID="5a2229e1aabaf4a6e31f11d55b93a13e" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.414441 kubelet[2404]: I0113 20:19:18.414402 2404 topology_manager.go:215] "Topology Admit Handler" 
podUID="97ccea0c96acf8e80c4bf1c625f3eb44" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.425565 systemd[1]: Created slice kubepods-burstable-podf665d2c69b61d5b75b6b5188f2e129ed.slice - libcontainer container kubepods-burstable-podf665d2c69b61d5b75b6b5188f2e129ed.slice. Jan 13 20:19:18.446303 systemd[1]: Created slice kubepods-burstable-pod5a2229e1aabaf4a6e31f11d55b93a13e.slice - libcontainer container kubepods-burstable-pod5a2229e1aabaf4a6e31f11d55b93a13e.slice. Jan 13 20:19:18.465083 systemd[1]: Created slice kubepods-burstable-pod97ccea0c96acf8e80c4bf1c625f3eb44.slice - libcontainer container kubepods-burstable-pod97ccea0c96acf8e80c4bf1c625f3eb44.slice. Jan 13 20:19:18.494356 kubelet[2404]: I0113 20:19:18.494011 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494356 kubelet[2404]: I0113 20:19:18.494062 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494356 kubelet[2404]: I0113 20:19:18.494087 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " 
pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494356 kubelet[2404]: I0113 20:19:18.494107 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494356 kubelet[2404]: I0113 20:19:18.494126 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494627 kubelet[2404]: I0113 20:19:18.494143 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97ccea0c96acf8e80c4bf1c625f3eb44-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-0-2aa1049bb1\" (UID: \"97ccea0c96acf8e80c4bf1c625f3eb44\") " pod="kube-system/kube-scheduler-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494627 kubelet[2404]: I0113 20:19:18.494159 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494627 kubelet[2404]: I0113 20:19:18.494190 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.494627 kubelet[2404]: I0113 20:19:18.494214 2404 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.495324 kubelet[2404]: E0113 20:19:18.495222 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-2aa1049bb1?timeout=10s\": dial tcp 138.199.153.196:6443: connect: connection refused" interval="400ms" Jan 13 20:19:18.600491 kubelet[2404]: I0113 20:19:18.600094 2404 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.605076 kubelet[2404]: E0113 20:19:18.605012 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.196:6443/api/v1/nodes\": dial tcp 138.199.153.196:6443: connect: connection refused" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:18.743635 containerd[1481]: time="2025-01-13T20:19:18.743148646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-0-2aa1049bb1,Uid:f665d2c69b61d5b75b6b5188f2e129ed,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.764058 containerd[1481]: time="2025-01-13T20:19:18.763615006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-0-2aa1049bb1,Uid:5a2229e1aabaf4a6e31f11d55b93a13e,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.773773 
containerd[1481]: time="2025-01-13T20:19:18.772765238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-0-2aa1049bb1,Uid:97ccea0c96acf8e80c4bf1c625f3eb44,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.896867 kubelet[2404]: E0113 20:19:18.896711 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-2aa1049bb1?timeout=10s\": dial tcp 138.199.153.196:6443: connect: connection refused" interval="800ms" Jan 13 20:19:19.008289 kubelet[2404]: I0113 20:19:19.008127 2404 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:19.009596 kubelet[2404]: E0113 20:19:19.009532 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.196:6443/api/v1/nodes\": dial tcp 138.199.153.196:6443: connect: connection refused" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:19.264426 kubelet[2404]: W0113 20:19:19.264015 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.153.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.264426 kubelet[2404]: E0113 20:19:19.264092 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.291571 kubelet[2404]: W0113 20:19:19.291443 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.153.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-2aa1049bb1&limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: 
connection refused Jan 13 20:19:19.291571 kubelet[2404]: E0113 20:19:19.291530 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-0-2aa1049bb1&limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.310377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030063491.mount: Deactivated successfully. Jan 13 20:19:19.334833 containerd[1481]: time="2025-01-13T20:19:19.334740583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.343090 containerd[1481]: time="2025-01-13T20:19:19.342429725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:19:19.344867 containerd[1481]: time="2025-01-13T20:19:19.344773704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.348999 containerd[1481]: time="2025-01-13T20:19:19.348011491Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.354131 containerd[1481]: time="2025-01-13T20:19:19.354047820Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:19.358497 containerd[1481]: time="2025-01-13T20:19:19.358401375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 
20:19:19.365673 containerd[1481]: time="2025-01-13T20:19:19.365608834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.367414 containerd[1481]: time="2025-01-13T20:19:19.366831404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 623.559676ms" Jan 13 20:19:19.369698 containerd[1481]: time="2025-01-13T20:19:19.368888460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:19.373771 containerd[1481]: time="2025-01-13T20:19:19.373712219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.993412ms" Jan 13 20:19:19.388948 containerd[1481]: time="2025-01-13T20:19:19.388873943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 615.999584ms" Jan 13 20:19:19.510552 kubelet[2404]: W0113 20:19:19.510480 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.153.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.510552 kubelet[2404]: E0113 20:19:19.510558 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.525611 containerd[1481]: time="2025-01-13T20:19:19.524985969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.525611 containerd[1481]: time="2025-01-13T20:19:19.525075970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.525611 containerd[1481]: time="2025-01-13T20:19:19.525088730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.525611 containerd[1481]: time="2025-01-13T20:19:19.525213651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.530993 containerd[1481]: time="2025-01-13T20:19:19.530685855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.530993 containerd[1481]: time="2025-01-13T20:19:19.530763976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.530993 containerd[1481]: time="2025-01-13T20:19:19.530793256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.531291 containerd[1481]: time="2025-01-13T20:19:19.530883897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.536433 containerd[1481]: time="2025-01-13T20:19:19.536209780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.536433 containerd[1481]: time="2025-01-13T20:19:19.536282981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.536433 containerd[1481]: time="2025-01-13T20:19:19.536299341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.536433 containerd[1481]: time="2025-01-13T20:19:19.536393462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.566156 systemd[1]: Started cri-containerd-323d24d823b6326cd5d189d026e7eb8c96b1ab3e34319b62d3b09c5192a77997.scope - libcontainer container 323d24d823b6326cd5d189d026e7eb8c96b1ab3e34319b62d3b09c5192a77997. Jan 13 20:19:19.580266 systemd[1]: Started cri-containerd-5afc0d77417cdf777e223c7647dc4ab1b3791cc6adf7486e5fe80d3fd5aa9d29.scope - libcontainer container 5afc0d77417cdf777e223c7647dc4ab1b3791cc6adf7486e5fe80d3fd5aa9d29. Jan 13 20:19:19.583512 systemd[1]: Started cri-containerd-aa3f20d1b5bd0d5bff822ccf152c93ec65785271bc8115cb349ab951a3f161a5.scope - libcontainer container aa3f20d1b5bd0d5bff822ccf152c93ec65785271bc8115cb349ab951a3f161a5. 
Jan 13 20:19:19.658252 containerd[1481]: time="2025-01-13T20:19:19.657967610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-0-2aa1049bb1,Uid:5a2229e1aabaf4a6e31f11d55b93a13e,Namespace:kube-system,Attempt:0,} returns sandbox id \"323d24d823b6326cd5d189d026e7eb8c96b1ab3e34319b62d3b09c5192a77997\"" Jan 13 20:19:19.665811 containerd[1481]: time="2025-01-13T20:19:19.665729153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-0-2aa1049bb1,Uid:f665d2c69b61d5b75b6b5188f2e129ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"5afc0d77417cdf777e223c7647dc4ab1b3791cc6adf7486e5fe80d3fd5aa9d29\"" Jan 13 20:19:19.666838 kubelet[2404]: W0113 20:19:19.666598 2404 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.153.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.666838 kubelet[2404]: E0113 20:19:19.666703 2404 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.196:6443: connect: connection refused Jan 13 20:19:19.673073 containerd[1481]: time="2025-01-13T20:19:19.673014932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-0-2aa1049bb1,Uid:97ccea0c96acf8e80c4bf1c625f3eb44,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa3f20d1b5bd0d5bff822ccf152c93ec65785271bc8115cb349ab951a3f161a5\"" Jan 13 20:19:19.674702 containerd[1481]: time="2025-01-13T20:19:19.674643065Z" level=info msg="CreateContainer within sandbox \"323d24d823b6326cd5d189d026e7eb8c96b1ab3e34319b62d3b09c5192a77997\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:19:19.677523 containerd[1481]: time="2025-01-13T20:19:19.677438328Z" level=info 
msg="CreateContainer within sandbox \"5afc0d77417cdf777e223c7647dc4ab1b3791cc6adf7486e5fe80d3fd5aa9d29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:19:19.681805 containerd[1481]: time="2025-01-13T20:19:19.681717603Z" level=info msg="CreateContainer within sandbox \"aa3f20d1b5bd0d5bff822ccf152c93ec65785271bc8115cb349ab951a3f161a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:19:19.698724 kubelet[2404]: E0113 20:19:19.698614 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-0-2aa1049bb1?timeout=10s\": dial tcp 138.199.153.196:6443: connect: connection refused" interval="1.6s" Jan 13 20:19:19.776877 containerd[1481]: time="2025-01-13T20:19:19.775457845Z" level=info msg="CreateContainer within sandbox \"323d24d823b6326cd5d189d026e7eb8c96b1ab3e34319b62d3b09c5192a77997\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29a72c262563d73566ec2c804e5a4d6ec8e46baf834250244d408c606e0b8b94\"" Jan 13 20:19:19.781926 containerd[1481]: time="2025-01-13T20:19:19.780536926Z" level=info msg="StartContainer for \"29a72c262563d73566ec2c804e5a4d6ec8e46baf834250244d408c606e0b8b94\"" Jan 13 20:19:19.796706 containerd[1481]: time="2025-01-13T20:19:19.796629977Z" level=info msg="CreateContainer within sandbox \"5afc0d77417cdf777e223c7647dc4ab1b3791cc6adf7486e5fe80d3fd5aa9d29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a9badf2531e9f981d8db1c56b9aa476aaed9b63a59d670bf11cb0d6488efc4e\"" Jan 13 20:19:19.798937 containerd[1481]: time="2025-01-13T20:19:19.798886435Z" level=info msg="StartContainer for \"1a9badf2531e9f981d8db1c56b9aa476aaed9b63a59d670bf11cb0d6488efc4e\"" Jan 13 20:19:19.806324 containerd[1481]: time="2025-01-13T20:19:19.804155798Z" level=info msg="CreateContainer within sandbox 
\"aa3f20d1b5bd0d5bff822ccf152c93ec65785271bc8115cb349ab951a3f161a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7043f16d41b0fff9312dabbac296d890e3afcf700fad4db7f3e0d3676e90c67\"" Jan 13 20:19:19.806324 containerd[1481]: time="2025-01-13T20:19:19.805857492Z" level=info msg="StartContainer for \"a7043f16d41b0fff9312dabbac296d890e3afcf700fad4db7f3e0d3676e90c67\"" Jan 13 20:19:19.826953 kubelet[2404]: I0113 20:19:19.821964 2404 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:19.826953 kubelet[2404]: E0113 20:19:19.822382 2404 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.196:6443/api/v1/nodes\": dial tcp 138.199.153.196:6443: connect: connection refused" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:19.893648 systemd[1]: Started cri-containerd-29a72c262563d73566ec2c804e5a4d6ec8e46baf834250244d408c606e0b8b94.scope - libcontainer container 29a72c262563d73566ec2c804e5a4d6ec8e46baf834250244d408c606e0b8b94. Jan 13 20:19:19.925278 systemd[1]: Started cri-containerd-1a9badf2531e9f981d8db1c56b9aa476aaed9b63a59d670bf11cb0d6488efc4e.scope - libcontainer container 1a9badf2531e9f981d8db1c56b9aa476aaed9b63a59d670bf11cb0d6488efc4e. Jan 13 20:19:19.928670 systemd[1]: Started cri-containerd-a7043f16d41b0fff9312dabbac296d890e3afcf700fad4db7f3e0d3676e90c67.scope - libcontainer container a7043f16d41b0fff9312dabbac296d890e3afcf700fad4db7f3e0d3676e90c67. 
Jan 13 20:19:20.019732 containerd[1481]: time="2025-01-13T20:19:20.019575514Z" level=info msg="StartContainer for \"29a72c262563d73566ec2c804e5a4d6ec8e46baf834250244d408c606e0b8b94\" returns successfully" Jan 13 20:19:20.048799 containerd[1481]: time="2025-01-13T20:19:20.048225994Z" level=info msg="StartContainer for \"a7043f16d41b0fff9312dabbac296d890e3afcf700fad4db7f3e0d3676e90c67\" returns successfully" Jan 13 20:19:20.052217 kubelet[2404]: E0113 20:19:20.051963 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.196:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-0-2aa1049bb1.181a59ff0cef18ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-0-2aa1049bb1,UID:ci-4186-1-0-0-2aa1049bb1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-0-2aa1049bb1,},FirstTimestamp:2025-01-13 20:19:18.273276076 +0000 UTC m=+0.634261237,LastTimestamp:2025-01-13 20:19:18.273276076 +0000 UTC m=+0.634261237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-0-2aa1049bb1,}" Jan 13 20:19:20.060563 containerd[1481]: time="2025-01-13T20:19:20.060411377Z" level=info msg="StartContainer for \"1a9badf2531e9f981d8db1c56b9aa476aaed9b63a59d670bf11cb0d6488efc4e\" returns successfully" Jan 13 20:19:21.432033 kubelet[2404]: I0113 20:19:21.429063 2404 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:22.792030 kubelet[2404]: I0113 20:19:22.791753 2404 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:22.886243 kubelet[2404]: E0113 20:19:22.886141 2404 controller.go:145] "Failed to ensure lease 
exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 20:19:23.267082 kubelet[2404]: I0113 20:19:23.266621 2404 apiserver.go:52] "Watching apiserver" Jan 13 20:19:23.291204 kubelet[2404]: I0113 20:19:23.291124 2404 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:19:25.604588 systemd[1]: Reloading requested from client PID 2676 ('systemctl') (unit session-5.scope)... Jan 13 20:19:25.604671 systemd[1]: Reloading... Jan 13 20:19:25.730936 zram_generator::config[2719]: No configuration found. Jan 13 20:19:25.840481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:25.937761 systemd[1]: Reloading finished in 332 ms. Jan 13 20:19:25.975649 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:25.975937 kubelet[2404]: I0113 20:19:25.975664 2404 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:25.988330 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:19:25.988643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:25.988697 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 114.0M memory peak, 0B memory swap peak. Jan 13 20:19:25.996271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:26.126228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:26.142320 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:26.239967 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:26.239967 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:26.239967 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:26.239967 kubelet[2760]: I0113 20:19:26.239743 2760 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:26.246557 kubelet[2760]: I0113 20:19:26.246452 2760 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:19:26.246557 kubelet[2760]: I0113 20:19:26.246507 2760 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:26.247991 kubelet[2760]: I0113 20:19:26.246822 2760 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:19:26.249716 kubelet[2760]: I0113 20:19:26.249647 2760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:19:26.254301 kubelet[2760]: I0113 20:19:26.254225 2760 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:26.264178 kubelet[2760]: I0113 20:19:26.264123 2760 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:19:26.264651 kubelet[2760]: I0113 20:19:26.264427 2760 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:26.264730 kubelet[2760]: I0113 20:19:26.264464 2760 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-0-2aa1049bb1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:19:26.264730 kubelet[2760]: I0113 20:19:26.264729 2760 topology_manager.go:138] "Creating topology manager with none policy" Jan 
13 20:19:26.264893 kubelet[2760]: I0113 20:19:26.264741 2760 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:19:26.264893 kubelet[2760]: I0113 20:19:26.264780 2760 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:26.265056 kubelet[2760]: I0113 20:19:26.265011 2760 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:19:26.265686 kubelet[2760]: I0113 20:19:26.265660 2760 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:26.265773 kubelet[2760]: I0113 20:19:26.265756 2760 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:19:26.266696 kubelet[2760]: I0113 20:19:26.266667 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:26.272596 kubelet[2760]: I0113 20:19:26.272432 2760 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:26.272993 kubelet[2760]: I0113 20:19:26.272758 2760 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:26.273872 kubelet[2760]: I0113 20:19:26.273702 2760 server.go:1264] "Started kubelet" Jan 13 20:19:26.279650 kubelet[2760]: I0113 20:19:26.279612 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:26.291268 kubelet[2760]: I0113 20:19:26.290809 2760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:26.292975 kubelet[2760]: I0113 20:19:26.292937 2760 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:19:26.295740 kubelet[2760]: I0113 20:19:26.294264 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:26.295740 kubelet[2760]: I0113 20:19:26.294509 2760 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:26.297155 kubelet[2760]: I0113 20:19:26.297119 2760 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:19:26.300290 kubelet[2760]: I0113 20:19:26.300077 2760 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:19:26.300483 kubelet[2760]: I0113 20:19:26.300354 2760 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:19:26.305971 kubelet[2760]: I0113 20:19:26.304327 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:19:26.306321 kubelet[2760]: I0113 20:19:26.306153 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:19:26.306321 kubelet[2760]: I0113 20:19:26.306248 2760 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:26.306321 kubelet[2760]: I0113 20:19:26.306276 2760 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:19:26.306393 kubelet[2760]: E0113 20:19:26.306335 2760 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:26.307712 kubelet[2760]: I0113 20:19:26.307448 2760 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:26.309253 kubelet[2760]: I0113 20:19:26.308514 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:26.321708 kubelet[2760]: I0113 20:19:26.321645 2760 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:26.348138 kubelet[2760]: E0113 20:19:26.347802 2760 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:26.401560 kubelet[2760]: I0113 20:19:26.401082 2760 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.407121 kubelet[2760]: E0113 20:19:26.407080 2760 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:19:26.413962 kubelet[2760]: I0113 20:19:26.412121 2760 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:26.413962 kubelet[2760]: I0113 20:19:26.412152 2760 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:26.413962 kubelet[2760]: I0113 20:19:26.412182 2760 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:26.414553 kubelet[2760]: I0113 20:19:26.414427 2760 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:19:26.414553 kubelet[2760]: I0113 20:19:26.414456 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:19:26.414553 kubelet[2760]: I0113 20:19:26.414479 2760 policy_none.go:49] "None policy: Start" Jan 13 20:19:26.415837 kubelet[2760]: I0113 20:19:26.415809 2760 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:26.417963 kubelet[2760]: I0113 20:19:26.416273 2760 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:26.417963 kubelet[2760]: I0113 20:19:26.416472 2760 state_mem.go:75] "Updated machine memory state" Jan 13 20:19:26.419947 kubelet[2760]: I0113 20:19:26.419865 2760 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.420291 kubelet[2760]: I0113 20:19:26.420098 2760 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.436935 kubelet[2760]: I0113 20:19:26.436776 2760 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Jan 13 20:19:26.437403 kubelet[2760]: I0113 20:19:26.437351 2760 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:19:26.437594 kubelet[2760]: I0113 20:19:26.437582 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:26.610259 kubelet[2760]: I0113 20:19:26.609833 2760 topology_manager.go:215] "Topology Admit Handler" podUID="5a2229e1aabaf4a6e31f11d55b93a13e" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.610259 kubelet[2760]: I0113 20:19:26.610016 2760 topology_manager.go:215] "Topology Admit Handler" podUID="97ccea0c96acf8e80c4bf1c625f3eb44" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.610259 kubelet[2760]: I0113 20:19:26.610064 2760 topology_manager.go:215] "Topology Admit Handler" podUID="f665d2c69b61d5b75b6b5188f2e129ed" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.626699 kubelet[2760]: E0113 20:19:26.626417 2760 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" already exists" pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.626699 kubelet[2760]: E0113 20:19:26.626524 2760 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-0-0-2aa1049bb1\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.629286 kubelet[2760]: E0113 20:19:26.627555 2760 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704588 kubelet[2760]: I0113 20:19:26.703272 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/97ccea0c96acf8e80c4bf1c625f3eb44-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-0-2aa1049bb1\" (UID: \"97ccea0c96acf8e80c4bf1c625f3eb44\") " pod="kube-system/kube-scheduler-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704588 kubelet[2760]: I0113 20:19:26.703318 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704588 kubelet[2760]: I0113 20:19:26.703345 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704588 kubelet[2760]: I0113 20:19:26.703365 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704588 kubelet[2760]: I0113 20:19:26.703383 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704831 kubelet[2760]: I0113 
20:19:26.703401 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704831 kubelet[2760]: I0113 20:19:26.703416 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f665d2c69b61d5b75b6b5188f2e129ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-0-2aa1049bb1\" (UID: \"f665d2c69b61d5b75b6b5188f2e129ed\") " pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704831 kubelet[2760]: I0113 20:19:26.703432 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:26.704831 kubelet[2760]: I0113 20:19:26.703449 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a2229e1aabaf4a6e31f11d55b93a13e-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" (UID: \"5a2229e1aabaf4a6e31f11d55b93a13e\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:27.267929 kubelet[2760]: I0113 20:19:27.267853 2760 apiserver.go:52] "Watching apiserver" Jan 13 20:19:27.300498 kubelet[2760]: I0113 20:19:27.300438 2760 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:19:27.388348 kubelet[2760]: E0113 20:19:27.388294 
2760 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-0-0-2aa1049bb1\" already exists" pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" Jan 13 20:19:27.423880 kubelet[2760]: I0113 20:19:27.423795 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-0-2aa1049bb1" podStartSLOduration=2.423772424 podStartE2EDuration="2.423772424s" podCreationTimestamp="2025-01-13 20:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:27.423687023 +0000 UTC m=+1.272825129" watchObservedRunningTime="2025-01-13 20:19:27.423772424 +0000 UTC m=+1.272910530" Jan 13 20:19:27.424153 kubelet[2760]: I0113 20:19:27.423976 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-0-2aa1049bb1" podStartSLOduration=4.423970546 podStartE2EDuration="4.423970546s" podCreationTimestamp="2025-01-13 20:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:27.401160837 +0000 UTC m=+1.250298983" watchObservedRunningTime="2025-01-13 20:19:27.423970546 +0000 UTC m=+1.273108652" Jan 13 20:19:27.440679 kubelet[2760]: I0113 20:19:27.440590 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-0-2aa1049bb1" podStartSLOduration=4.440566433 podStartE2EDuration="4.440566433s" podCreationTimestamp="2025-01-13 20:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:27.439113098 +0000 UTC m=+1.288251204" watchObservedRunningTime="2025-01-13 20:19:27.440566433 +0000 UTC m=+1.289704579" Jan 13 20:19:27.760437 sudo[1803]: pam_unix(sudo:session): 
session closed for user root Jan 13 20:19:27.921832 sshd[1802]: Connection closed by 139.178.89.65 port 55038 Jan 13 20:19:27.926423 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 13 20:19:27.934555 systemd[1]: sshd@4-138.199.153.196:22-139.178.89.65:55038.service: Deactivated successfully. Jan 13 20:19:27.938393 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:19:27.938640 systemd[1]: session-5.scope: Consumed 8.001s CPU time, 188.9M memory peak, 0B memory swap peak. Jan 13 20:19:27.939715 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:19:27.941608 systemd-logind[1455]: Removed session 5. Jan 13 20:19:38.429083 kubelet[2760]: I0113 20:19:38.427591 2760 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:19:38.429765 containerd[1481]: time="2025-01-13T20:19:38.429229104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:19:38.430928 kubelet[2760]: I0113 20:19:38.430470 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:19:38.953487 kubelet[2760]: I0113 20:19:38.952626 2760 topology_manager.go:215] "Topology Admit Handler" podUID="60093501-6eb0-4a0a-b185-9c060a46e8ff" podNamespace="kube-system" podName="kube-proxy-pcrxk" Jan 13 20:19:38.962325 kubelet[2760]: W0113 20:19:38.961578 2760 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-0-2aa1049bb1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-2aa1049bb1' and this object Jan 13 20:19:38.962325 kubelet[2760]: E0113 20:19:38.961630 2760 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-0-2aa1049bb1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-2aa1049bb1' and this object Jan 13 20:19:38.964088 kubelet[2760]: W0113 20:19:38.963994 2760 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-0-2aa1049bb1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-2aa1049bb1' and this object Jan 13 20:19:38.964088 kubelet[2760]: E0113 20:19:38.964060 2760 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-0-2aa1049bb1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-0-2aa1049bb1' 
and this object Jan 13 20:19:38.964782 systemd[1]: Created slice kubepods-besteffort-pod60093501_6eb0_4a0a_b185_9c060a46e8ff.slice - libcontainer container kubepods-besteffort-pod60093501_6eb0_4a0a_b185_9c060a46e8ff.slice. Jan 13 20:19:38.969041 kubelet[2760]: I0113 20:19:38.968990 2760 topology_manager.go:215] "Topology Admit Handler" podUID="c91b44e0-33dd-41c5-a409-5b8df430ba3f" podNamespace="kube-flannel" podName="kube-flannel-ds-ptmgh" Jan 13 20:19:38.983375 systemd[1]: Created slice kubepods-burstable-podc91b44e0_33dd_41c5_a409_5b8df430ba3f.slice - libcontainer container kubepods-burstable-podc91b44e0_33dd_41c5_a409_5b8df430ba3f.slice. Jan 13 20:19:38.986956 kubelet[2760]: I0113 20:19:38.986123 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60093501-6eb0-4a0a-b185-9c060a46e8ff-xtables-lock\") pod \"kube-proxy-pcrxk\" (UID: \"60093501-6eb0-4a0a-b185-9c060a46e8ff\") " pod="kube-system/kube-proxy-pcrxk" Jan 13 20:19:38.986956 kubelet[2760]: I0113 20:19:38.986166 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60093501-6eb0-4a0a-b185-9c060a46e8ff-lib-modules\") pod \"kube-proxy-pcrxk\" (UID: \"60093501-6eb0-4a0a-b185-9c060a46e8ff\") " pod="kube-system/kube-proxy-pcrxk" Jan 13 20:19:38.986956 kubelet[2760]: I0113 20:19:38.986188 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bxq\" (UniqueName: \"kubernetes.io/projected/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-api-access-s9bxq\") pod \"kube-proxy-pcrxk\" (UID: \"60093501-6eb0-4a0a-b185-9c060a46e8ff\") " pod="kube-system/kube-proxy-pcrxk" Jan 13 20:19:38.986956 kubelet[2760]: I0113 20:19:38.986206 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/c91b44e0-33dd-41c5-a409-5b8df430ba3f-run\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:38.986956 kubelet[2760]: I0113 20:19:38.986225 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c91b44e0-33dd-41c5-a409-5b8df430ba3f-cni-plugin\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:38.987184 kubelet[2760]: I0113 20:19:38.986261 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c91b44e0-33dd-41c5-a409-5b8df430ba3f-xtables-lock\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:38.987184 kubelet[2760]: I0113 20:19:38.986281 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-proxy\") pod \"kube-proxy-pcrxk\" (UID: \"60093501-6eb0-4a0a-b185-9c060a46e8ff\") " pod="kube-system/kube-proxy-pcrxk" Jan 13 20:19:38.987184 kubelet[2760]: I0113 20:19:38.986300 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z875\" (UniqueName: \"kubernetes.io/projected/c91b44e0-33dd-41c5-a409-5b8df430ba3f-kube-api-access-7z875\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:38.987184 kubelet[2760]: I0113 20:19:38.986316 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: 
\"kubernetes.io/host-path/c91b44e0-33dd-41c5-a409-5b8df430ba3f-cni\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:38.987184 kubelet[2760]: I0113 20:19:38.986333 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c91b44e0-33dd-41c5-a409-5b8df430ba3f-flannel-cfg\") pod \"kube-flannel-ds-ptmgh\" (UID: \"c91b44e0-33dd-41c5-a409-5b8df430ba3f\") " pod="kube-flannel/kube-flannel-ds-ptmgh" Jan 13 20:19:39.105982 kubelet[2760]: E0113 20:19:39.105135 2760 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:19:39.105982 kubelet[2760]: E0113 20:19:39.105177 2760 projected.go:200] Error preparing data for projected volume kube-api-access-7z875 for pod kube-flannel/kube-flannel-ds-ptmgh: configmap "kube-root-ca.crt" not found Jan 13 20:19:39.105982 kubelet[2760]: E0113 20:19:39.105294 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c91b44e0-33dd-41c5-a409-5b8df430ba3f-kube-api-access-7z875 podName:c91b44e0-33dd-41c5-a409-5b8df430ba3f nodeName:}" failed. No retries permitted until 2025-01-13 20:19:39.605219636 +0000 UTC m=+13.454357742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7z875" (UniqueName: "kubernetes.io/projected/c91b44e0-33dd-41c5-a409-5b8df430ba3f-kube-api-access-7z875") pod "kube-flannel-ds-ptmgh" (UID: "c91b44e0-33dd-41c5-a409-5b8df430ba3f") : configmap "kube-root-ca.crt" not found Jan 13 20:19:39.888519 containerd[1481]: time="2025-01-13T20:19:39.888465275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptmgh,Uid:c91b44e0-33dd-41c5-a409-5b8df430ba3f,Namespace:kube-flannel,Attempt:0,}" Jan 13 20:19:39.922865 containerd[1481]: time="2025-01-13T20:19:39.922337487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:39.922865 containerd[1481]: time="2025-01-13T20:19:39.922492569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:39.922865 containerd[1481]: time="2025-01-13T20:19:39.922537969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:39.922865 containerd[1481]: time="2025-01-13T20:19:39.922709931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:39.946154 systemd[1]: Started cri-containerd-e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8.scope - libcontainer container e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8. 
Jan 13 20:19:39.979940 containerd[1481]: time="2025-01-13T20:19:39.979628183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptmgh,Uid:c91b44e0-33dd-41c5-a409-5b8df430ba3f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\"" Jan 13 20:19:39.982201 containerd[1481]: time="2025-01-13T20:19:39.982167094Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 20:19:40.088977 kubelet[2760]: E0113 20:19:40.088628 2760 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:40.088977 kubelet[2760]: E0113 20:19:40.088760 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-proxy podName:60093501-6eb0-4a0a-b185-9c060a46e8ff nodeName:}" failed. No retries permitted until 2025-01-13 20:19:40.588736562 +0000 UTC m=+14.437874668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-proxy") pod "kube-proxy-pcrxk" (UID: "60093501-6eb0-4a0a-b185-9c060a46e8ff") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:40.107450 kubelet[2760]: E0113 20:19:40.106743 2760 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:40.107450 kubelet[2760]: E0113 20:19:40.106805 2760 projected.go:200] Error preparing data for projected volume kube-api-access-s9bxq for pod kube-system/kube-proxy-pcrxk: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:40.107450 kubelet[2760]: E0113 20:19:40.106939 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-api-access-s9bxq podName:60093501-6eb0-4a0a-b185-9c060a46e8ff nodeName:}" failed. No retries permitted until 2025-01-13 20:19:40.606873825 +0000 UTC m=+14.456011931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s9bxq" (UniqueName: "kubernetes.io/projected/60093501-6eb0-4a0a-b185-9c060a46e8ff-kube-api-access-s9bxq") pod "kube-proxy-pcrxk" (UID: "60093501-6eb0-4a0a-b185-9c060a46e8ff") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:19:40.778005 containerd[1481]: time="2025-01-13T20:19:40.777534111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcrxk,Uid:60093501-6eb0-4a0a-b185-9c060a46e8ff,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:40.816925 containerd[1481]: time="2025-01-13T20:19:40.816458469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:40.816925 containerd[1481]: time="2025-01-13T20:19:40.816530070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:40.816925 containerd[1481]: time="2025-01-13T20:19:40.816542510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:40.816925 containerd[1481]: time="2025-01-13T20:19:40.816643111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:40.843308 systemd[1]: Started cri-containerd-6a163ca0e48f1f81180cdc5e82df1fd1f3368c35101a5ac1196bf47003bad279.scope - libcontainer container 6a163ca0e48f1f81180cdc5e82df1fd1f3368c35101a5ac1196bf47003bad279. Jan 13 20:19:40.875640 containerd[1481]: time="2025-01-13T20:19:40.875594356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcrxk,Uid:60093501-6eb0-4a0a-b185-9c060a46e8ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a163ca0e48f1f81180cdc5e82df1fd1f3368c35101a5ac1196bf47003bad279\"" Jan 13 20:19:40.882941 containerd[1481]: time="2025-01-13T20:19:40.882754844Z" level=info msg="CreateContainer within sandbox \"6a163ca0e48f1f81180cdc5e82df1fd1f3368c35101a5ac1196bf47003bad279\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:19:40.903012 containerd[1481]: time="2025-01-13T20:19:40.902959653Z" level=info msg="CreateContainer within sandbox \"6a163ca0e48f1f81180cdc5e82df1fd1f3368c35101a5ac1196bf47003bad279\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"234db8128dbbf3380146cfe8650ff27c3e6ecf019e3d3969b42a94d84a527f91\"" Jan 13 20:19:40.907937 containerd[1481]: time="2025-01-13T20:19:40.905933089Z" level=info msg="StartContainer for \"234db8128dbbf3380146cfe8650ff27c3e6ecf019e3d3969b42a94d84a527f91\"" 
Jan 13 20:19:40.944312 systemd[1]: Started cri-containerd-234db8128dbbf3380146cfe8650ff27c3e6ecf019e3d3969b42a94d84a527f91.scope - libcontainer container 234db8128dbbf3380146cfe8650ff27c3e6ecf019e3d3969b42a94d84a527f91. Jan 13 20:19:40.996617 containerd[1481]: time="2025-01-13T20:19:40.996556403Z" level=info msg="StartContainer for \"234db8128dbbf3380146cfe8650ff27c3e6ecf019e3d3969b42a94d84a527f91\" returns successfully" Jan 13 20:19:41.430983 kubelet[2760]: I0113 20:19:41.430852 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pcrxk" podStartSLOduration=3.430829602 podStartE2EDuration="3.430829602s" podCreationTimestamp="2025-01-13 20:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:41.430370316 +0000 UTC m=+15.279508462" watchObservedRunningTime="2025-01-13 20:19:41.430829602 +0000 UTC m=+15.279967668" Jan 13 20:19:42.690178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112861613.mount: Deactivated successfully. 
Jan 13 20:19:42.743459 containerd[1481]: time="2025-01-13T20:19:42.743355539Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:42.746520 containerd[1481]: time="2025-01-13T20:19:42.746237775Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jan 13 20:19:42.750563 containerd[1481]: time="2025-01-13T20:19:42.749609857Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:42.755711 containerd[1481]: time="2025-01-13T20:19:42.755653333Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:42.756216 containerd[1481]: time="2025-01-13T20:19:42.756171340Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.773965486s" Jan 13 20:19:42.756216 containerd[1481]: time="2025-01-13T20:19:42.756213100Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 13 20:19:42.762247 containerd[1481]: time="2025-01-13T20:19:42.762196615Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 20:19:42.787303 containerd[1481]: 
time="2025-01-13T20:19:42.787239690Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1\"" Jan 13 20:19:42.788336 containerd[1481]: time="2025-01-13T20:19:42.788228943Z" level=info msg="StartContainer for \"441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1\"" Jan 13 20:19:42.823179 systemd[1]: Started cri-containerd-441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1.scope - libcontainer container 441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1. Jan 13 20:19:42.852115 containerd[1481]: time="2025-01-13T20:19:42.852060025Z" level=info msg="StartContainer for \"441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1\" returns successfully" Jan 13 20:19:42.854096 systemd[1]: cri-containerd-441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1.scope: Deactivated successfully. Jan 13 20:19:42.881652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1-rootfs.mount: Deactivated successfully. 
Jan 13 20:19:42.906365 containerd[1481]: time="2025-01-13T20:19:42.906070063Z" level=info msg="shim disconnected" id=441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1 namespace=k8s.io Jan 13 20:19:42.906365 containerd[1481]: time="2025-01-13T20:19:42.906146984Z" level=warning msg="cleaning up after shim disconnected" id=441ffc049243ddce9741a581d80420c16a35ad1443248fe2b7da41365b5400b1 namespace=k8s.io Jan 13 20:19:42.906365 containerd[1481]: time="2025-01-13T20:19:42.906157824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:43.429043 containerd[1481]: time="2025-01-13T20:19:43.428992409Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 20:19:45.980533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680788213.mount: Deactivated successfully. Jan 13 20:19:46.771416 containerd[1481]: time="2025-01-13T20:19:46.771348810Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:46.772539 containerd[1481]: time="2025-01-13T20:19:46.772488665Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 13 20:19:46.773402 containerd[1481]: time="2025-01-13T20:19:46.773362996Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:46.779857 containerd[1481]: time="2025-01-13T20:19:46.779811321Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:46.783571 containerd[1481]: time="2025-01-13T20:19:46.782689318Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id 
\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.353644708s" Jan 13 20:19:46.783571 containerd[1481]: time="2025-01-13T20:19:46.783041523Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 13 20:19:46.787474 containerd[1481]: time="2025-01-13T20:19:46.787434300Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:19:46.815032 containerd[1481]: time="2025-01-13T20:19:46.814837338Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37\"" Jan 13 20:19:46.816646 containerd[1481]: time="2025-01-13T20:19:46.815613828Z" level=info msg="StartContainer for \"be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37\"" Jan 13 20:19:46.848425 systemd[1]: Started cri-containerd-be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37.scope - libcontainer container be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37. Jan 13 20:19:46.880890 systemd[1]: cri-containerd-be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37.scope: Deactivated successfully. 
Jan 13 20:19:46.883732 containerd[1481]: time="2025-01-13T20:19:46.883630916Z" level=info msg="StartContainer for \"be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37\" returns successfully" Jan 13 20:19:46.906352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37-rootfs.mount: Deactivated successfully. Jan 13 20:19:46.959008 kubelet[2760]: I0113 20:19:46.958965 2760 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:19:46.992179 containerd[1481]: time="2025-01-13T20:19:46.991835289Z" level=info msg="shim disconnected" id=be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37 namespace=k8s.io Jan 13 20:19:46.992179 containerd[1481]: time="2025-01-13T20:19:46.991989051Z" level=warning msg="cleaning up after shim disconnected" id=be0a5711ccc055eb34e54f7e721e7bd4e56e4dec24ea3d61812d82bfde1dab37 namespace=k8s.io Jan 13 20:19:46.992179 containerd[1481]: time="2025-01-13T20:19:46.991998691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:19:47.002167 kubelet[2760]: I0113 20:19:47.001128 2760 topology_manager.go:215] "Topology Admit Handler" podUID="dfa86852-19c0-4e06-b0f1-607af6e3ef19" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nxxkf" Jan 13 20:19:47.009938 kubelet[2760]: I0113 20:19:47.009792 2760 topology_manager.go:215] "Topology Admit Handler" podUID="75f8379c-0036-4bf8-9741-2cd0eafe4527" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hzlks" Jan 13 20:19:47.016250 containerd[1481]: time="2025-01-13T20:19:47.014850071Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:19:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:19:47.025679 systemd[1]: Created slice kubepods-burstable-poddfa86852_19c0_4e06_b0f1_607af6e3ef19.slice - libcontainer container 
kubepods-burstable-poddfa86852_19c0_4e06_b0f1_607af6e3ef19.slice. Jan 13 20:19:47.039405 systemd[1]: Created slice kubepods-burstable-pod75f8379c_0036_4bf8_9741_2cd0eafe4527.slice - libcontainer container kubepods-burstable-pod75f8379c_0036_4bf8_9741_2cd0eafe4527.slice. Jan 13 20:19:47.043082 kubelet[2760]: I0113 20:19:47.043002 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfa86852-19c0-4e06-b0f1-607af6e3ef19-config-volume\") pod \"coredns-7db6d8ff4d-nxxkf\" (UID: \"dfa86852-19c0-4e06-b0f1-607af6e3ef19\") " pod="kube-system/coredns-7db6d8ff4d-nxxkf" Jan 13 20:19:47.043082 kubelet[2760]: I0113 20:19:47.043040 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f8379c-0036-4bf8-9741-2cd0eafe4527-config-volume\") pod \"coredns-7db6d8ff4d-hzlks\" (UID: \"75f8379c-0036-4bf8-9741-2cd0eafe4527\") " pod="kube-system/coredns-7db6d8ff4d-hzlks" Jan 13 20:19:47.043476 kubelet[2760]: I0113 20:19:47.043059 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8548\" (UniqueName: \"kubernetes.io/projected/75f8379c-0036-4bf8-9741-2cd0eafe4527-kube-api-access-k8548\") pod \"coredns-7db6d8ff4d-hzlks\" (UID: \"75f8379c-0036-4bf8-9741-2cd0eafe4527\") " pod="kube-system/coredns-7db6d8ff4d-hzlks" Jan 13 20:19:47.043476 kubelet[2760]: I0113 20:19:47.043370 2760 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brqg\" (UniqueName: \"kubernetes.io/projected/dfa86852-19c0-4e06-b0f1-607af6e3ef19-kube-api-access-2brqg\") pod \"coredns-7db6d8ff4d-nxxkf\" (UID: \"dfa86852-19c0-4e06-b0f1-607af6e3ef19\") " pod="kube-system/coredns-7db6d8ff4d-nxxkf" Jan 13 20:19:47.335108 containerd[1481]: time="2025-01-13T20:19:47.334575962Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxxkf,Uid:dfa86852-19c0-4e06-b0f1-607af6e3ef19,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:47.345752 containerd[1481]: time="2025-01-13T20:19:47.345650108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzlks,Uid:75f8379c-0036-4bf8-9741-2cd0eafe4527,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:47.411008 containerd[1481]: time="2025-01-13T20:19:47.410948728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxxkf,Uid:dfa86852-19c0-4e06-b0f1-607af6e3ef19,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:19:47.411867 kubelet[2760]: E0113 20:19:47.411462 2760 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:19:47.411867 kubelet[2760]: E0113 20:19:47.411557 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nxxkf" Jan 13 20:19:47.411867 kubelet[2760]: E0113 20:19:47.411579 2760 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252\": plugin type=\"flannel\" 
failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nxxkf" Jan 13 20:19:47.411867 kubelet[2760]: E0113 20:19:47.411618 2760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxxkf_kube-system(dfa86852-19c0-4e06-b0f1-607af6e3ef19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxxkf_kube-system(dfa86852-19c0-4e06-b0f1-607af6e3ef19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-nxxkf" podUID="dfa86852-19c0-4e06-b0f1-607af6e3ef19" Jan 13 20:19:47.413443 containerd[1481]: time="2025-01-13T20:19:47.413370320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzlks,Uid:75f8379c-0036-4bf8-9741-2cd0eafe4527,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf69d8ea65952e1169ec391d8244a9ab017fc091e317637ef03cdb90724ce173\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:19:47.413944 kubelet[2760]: E0113 20:19:47.413884 2760 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf69d8ea65952e1169ec391d8244a9ab017fc091e317637ef03cdb90724ce173\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:19:47.414041 kubelet[2760]: E0113 20:19:47.413970 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf69d8ea65952e1169ec391d8244a9ab017fc091e317637ef03cdb90724ce173\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-hzlks" Jan 13 20:19:47.414041 kubelet[2760]: E0113 20:19:47.413993 2760 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf69d8ea65952e1169ec391d8244a9ab017fc091e317637ef03cdb90724ce173\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-hzlks" Jan 13 20:19:47.414129 kubelet[2760]: E0113 20:19:47.414048 2760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hzlks_kube-system(75f8379c-0036-4bf8-9741-2cd0eafe4527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hzlks_kube-system(75f8379c-0036-4bf8-9741-2cd0eafe4527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf69d8ea65952e1169ec391d8244a9ab017fc091e317637ef03cdb90724ce173\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-hzlks" podUID="75f8379c-0036-4bf8-9741-2cd0eafe4527" Jan 13 20:19:47.445222 containerd[1481]: time="2025-01-13T20:19:47.445179259Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 20:19:47.467141 containerd[1481]: time="2025-01-13T20:19:47.467081668Z" level=info msg="CreateContainer within sandbox \"e198aceb97e180ea925b68ff0560cba84b80b674a2c4005d512f72836e8d15a8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id 
\"3193ecf483f31029088b5dd4b3ef15c4d10813468af8af7bf1e074066ffe2927\"" Jan 13 20:19:47.468958 containerd[1481]: time="2025-01-13T20:19:47.468651888Z" level=info msg="StartContainer for \"3193ecf483f31029088b5dd4b3ef15c4d10813468af8af7bf1e074066ffe2927\"" Jan 13 20:19:47.502278 systemd[1]: Started cri-containerd-3193ecf483f31029088b5dd4b3ef15c4d10813468af8af7bf1e074066ffe2927.scope - libcontainer container 3193ecf483f31029088b5dd4b3ef15c4d10813468af8af7bf1e074066ffe2927. Jan 13 20:19:47.538320 containerd[1481]: time="2025-01-13T20:19:47.538142644Z" level=info msg="StartContainer for \"3193ecf483f31029088b5dd4b3ef15c4d10813468af8af7bf1e074066ffe2927\" returns successfully" Jan 13 20:19:47.871324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d01e89c92afe823e52e990d814c13a7d71b19c0519860fadb3684f8c8558252-shm.mount: Deactivated successfully. Jan 13 20:19:48.461968 kubelet[2760]: I0113 20:19:48.460508 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ptmgh" podStartSLOduration=3.657587909 podStartE2EDuration="10.460485283s" podCreationTimestamp="2025-01-13 20:19:38 +0000 UTC" firstStartedPulling="2025-01-13 20:19:39.981579647 +0000 UTC m=+13.830717753" lastFinishedPulling="2025-01-13 20:19:46.784477021 +0000 UTC m=+20.633615127" observedRunningTime="2025-01-13 20:19:48.460228839 +0000 UTC m=+22.309366945" watchObservedRunningTime="2025-01-13 20:19:48.460485283 +0000 UTC m=+22.309623429" Jan 13 20:19:48.631438 systemd-networkd[1367]: flannel.1: Link UP Jan 13 20:19:48.631448 systemd-networkd[1367]: flannel.1: Gained carrier Jan 13 20:19:50.145584 systemd-networkd[1367]: flannel.1: Gained IPv6LL Jan 13 20:19:58.308546 containerd[1481]: time="2025-01-13T20:19:58.308026734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxxkf,Uid:dfa86852-19c0-4e06-b0f1-607af6e3ef19,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:58.381092 systemd-networkd[1367]: cni0: Link UP Jan 13 
20:19:58.381173 systemd-networkd[1367]: cni0: Gained carrier Jan 13 20:19:58.385299 systemd-networkd[1367]: vethf3407cc6: Link UP Jan 13 20:19:58.386252 kernel: cni0: port 1(vethf3407cc6) entered blocking state Jan 13 20:19:58.387467 kernel: cni0: port 1(vethf3407cc6) entered disabled state Jan 13 20:19:58.389407 kernel: vethf3407cc6: entered allmulticast mode Jan 13 20:19:58.389531 kernel: vethf3407cc6: entered promiscuous mode Jan 13 20:19:58.389969 systemd-networkd[1367]: cni0: Lost carrier Jan 13 20:19:58.394956 kernel: cni0: port 1(vethf3407cc6) entered blocking state Jan 13 20:19:58.395072 kernel: cni0: port 1(vethf3407cc6) entered forwarding state Jan 13 20:19:58.395462 systemd-networkd[1367]: vethf3407cc6: Gained carrier Jan 13 20:19:58.397529 systemd-networkd[1367]: cni0: Gained carrier Jan 13 20:19:58.414247 containerd[1481]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jan 13 20:19:58.414247 containerd[1481]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:19:58.445474 containerd[1481]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:19:58.444574714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:58.445474 containerd[1481]: time="2025-01-13T20:19:58.444691515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:58.445474 containerd[1481]: time="2025-01-13T20:19:58.444718636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:58.445474 containerd[1481]: time="2025-01-13T20:19:58.444854958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:58.467139 systemd[1]: run-containerd-runc-k8s.io-ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0-runc.ym0Tww.mount: Deactivated successfully. Jan 13 20:19:58.478140 systemd[1]: Started cri-containerd-ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0.scope - libcontainer container ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0. 
Jan 13 20:19:58.516152 containerd[1481]: time="2025-01-13T20:19:58.516089290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxxkf,Uid:dfa86852-19c0-4e06-b0f1-607af6e3ef19,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0\"" Jan 13 20:19:58.521391 containerd[1481]: time="2025-01-13T20:19:58.521245043Z" level=info msg="CreateContainer within sandbox \"ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:19:58.546179 containerd[1481]: time="2025-01-13T20:19:58.546036035Z" level=info msg="CreateContainer within sandbox \"ecacd8741059dc8e46ec99485843bed4ab292ae412fdec38a0e4452a3bbb68d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88f1af337245a047dd67661c554e28c8172bc77f9c08cc492385c80427275087\"" Jan 13 20:19:58.547884 containerd[1481]: time="2025-01-13T20:19:58.547839741Z" level=info msg="StartContainer for \"88f1af337245a047dd67661c554e28c8172bc77f9c08cc492385c80427275087\"" Jan 13 20:19:58.578153 systemd[1]: Started cri-containerd-88f1af337245a047dd67661c554e28c8172bc77f9c08cc492385c80427275087.scope - libcontainer container 88f1af337245a047dd67661c554e28c8172bc77f9c08cc492385c80427275087. 
Jan 13 20:19:58.614308 containerd[1481]: time="2025-01-13T20:19:58.614082522Z" level=info msg="StartContainer for \"88f1af337245a047dd67661c554e28c8172bc77f9c08cc492385c80427275087\" returns successfully" Jan 13 20:19:59.489439 systemd-networkd[1367]: cni0: Gained IPv6LL Jan 13 20:19:59.522949 kubelet[2760]: I0113 20:19:59.522576 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nxxkf" podStartSLOduration=20.522541391 podStartE2EDuration="20.522541391s" podCreationTimestamp="2025-01-13 20:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:59.500331634 +0000 UTC m=+33.349469740" watchObservedRunningTime="2025-01-13 20:19:59.522541391 +0000 UTC m=+33.371679497" Jan 13 20:19:59.795028 update_engine[1456]: I20250113 20:19:59.794763 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:19:59.795028 update_engine[1456]: I20250113 20:19:59.794828 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:19:59.795448 update_engine[1456]: I20250113 20:19:59.795154 1456 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:19:59.795635 update_engine[1456]: I20250113 20:19:59.795559 1456 omaha_request_params.cc:62] Current group set to beta Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795706 1456 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795720 1456 update_attempter.cc:643] Scheduling an action processor start. 
Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795743 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795773 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795825 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795833 1456 omaha_request_action.cc:272] Request: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: Jan 13 20:19:59.795860 update_engine[1456]: I20250113 20:19:59.795840 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:19:59.798300 update_engine[1456]: I20250113 20:19:59.798247 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:19:59.799008 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:19:59.799293 update_engine[1456]: I20250113 20:19:59.798848 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:19:59.801598 update_engine[1456]: E20250113 20:19:59.801512 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:19:59.801759 update_engine[1456]: I20250113 20:19:59.801627 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:20:00.129172 systemd-networkd[1367]: vethf3407cc6: Gained IPv6LL Jan 13 20:20:02.308879 containerd[1481]: time="2025-01-13T20:20:02.308043757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzlks,Uid:75f8379c-0036-4bf8-9741-2cd0eafe4527,Namespace:kube-system,Attempt:0,}" Jan 13 20:20:02.350554 systemd-networkd[1367]: veth328d4c84: Link UP Jan 13 20:20:02.352424 kernel: cni0: port 2(veth328d4c84) entered blocking state Jan 13 20:20:02.352576 kernel: cni0: port 2(veth328d4c84) entered disabled state Jan 13 20:20:02.352638 kernel: veth328d4c84: entered allmulticast mode Jan 13 20:20:02.354846 kernel: veth328d4c84: entered promiscuous mode Jan 13 20:20:02.366925 kernel: cni0: port 2(veth328d4c84) entered blocking state Jan 13 20:20:02.367028 kernel: cni0: port 2(veth328d4c84) entered forwarding state Jan 13 20:20:02.367321 systemd-networkd[1367]: veth328d4c84: Gained carrier Jan 13 20:20:02.372111 containerd[1481]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jan 13 20:20:02.372111 containerd[1481]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:20:02.401061 containerd[1481]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:20:02.400882143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:20:02.401061 containerd[1481]: time="2025-01-13T20:20:02.400985185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:20:02.401061 containerd[1481]: time="2025-01-13T20:20:02.400998185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:02.401715 containerd[1481]: time="2025-01-13T20:20:02.401338390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:20:02.430657 systemd[1]: Started cri-containerd-91265fce5a52adf8b1071b3496199adb07ceee7c73b8980281720016e8278c3b.scope - libcontainer container 91265fce5a52adf8b1071b3496199adb07ceee7c73b8980281720016e8278c3b. 
Jan 13 20:20:02.492280 containerd[1481]: time="2025-01-13T20:20:02.492149427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hzlks,Uid:75f8379c-0036-4bf8-9741-2cd0eafe4527,Namespace:kube-system,Attempt:0,} returns sandbox id \"91265fce5a52adf8b1071b3496199adb07ceee7c73b8980281720016e8278c3b\"" Jan 13 20:20:02.497973 containerd[1481]: time="2025-01-13T20:20:02.497925391Z" level=info msg="CreateContainer within sandbox \"91265fce5a52adf8b1071b3496199adb07ceee7c73b8980281720016e8278c3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:20:02.516330 containerd[1481]: time="2025-01-13T20:20:02.516277297Z" level=info msg="CreateContainer within sandbox \"91265fce5a52adf8b1071b3496199adb07ceee7c73b8980281720016e8278c3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f981145b170674e5b7e48c89f741f26fc70e6ed837e4c56919268dd796aca0b1\"" Jan 13 20:20:02.517511 containerd[1481]: time="2025-01-13T20:20:02.517457634Z" level=info msg="StartContainer for \"f981145b170674e5b7e48c89f741f26fc70e6ed837e4c56919268dd796aca0b1\"" Jan 13 20:20:02.555134 systemd[1]: Started cri-containerd-f981145b170674e5b7e48c89f741f26fc70e6ed837e4c56919268dd796aca0b1.scope - libcontainer container f981145b170674e5b7e48c89f741f26fc70e6ed837e4c56919268dd796aca0b1. 
Jan 13 20:20:02.597017 containerd[1481]: time="2025-01-13T20:20:02.596862826Z" level=info msg="StartContainer for \"f981145b170674e5b7e48c89f741f26fc70e6ed837e4c56919268dd796aca0b1\" returns successfully" Jan 13 20:20:03.518038 kubelet[2760]: I0113 20:20:03.517277 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hzlks" podStartSLOduration=24.517255812 podStartE2EDuration="24.517255812s" podCreationTimestamp="2025-01-13 20:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:20:03.516570642 +0000 UTC m=+37.365708748" watchObservedRunningTime="2025-01-13 20:20:03.517255812 +0000 UTC m=+37.366393958" Jan 13 20:20:04.097293 systemd-networkd[1367]: veth328d4c84: Gained IPv6LL Jan 13 20:20:09.802505 update_engine[1456]: I20250113 20:20:09.801275 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:20:09.802505 update_engine[1456]: I20250113 20:20:09.801657 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:20:09.802505 update_engine[1456]: I20250113 20:20:09.802281 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:20:09.804187 update_engine[1456]: E20250113 20:20:09.803848 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:20:09.804187 update_engine[1456]: I20250113 20:20:09.804136 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:20:19.803149 update_engine[1456]: I20250113 20:20:19.803050 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:20:19.803586 update_engine[1456]: I20250113 20:20:19.803313 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:20:19.803586 update_engine[1456]: I20250113 20:20:19.803566 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:20:19.804125 update_engine[1456]: E20250113 20:20:19.804085 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:20:19.804203 update_engine[1456]: I20250113 20:20:19.804152 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:20:29.795094 update_engine[1456]: I20250113 20:20:29.794978 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:20:29.795742 update_engine[1456]: I20250113 20:20:29.795254 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:20:29.795742 update_engine[1456]: I20250113 20:20:29.795547 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:20:29.796265 update_engine[1456]: E20250113 20:20:29.796187 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:20:29.796400 update_engine[1456]: I20250113 20:20:29.796281 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:20:29.796400 update_engine[1456]: I20250113 20:20:29.796300 1456 omaha_request_action.cc:617] Omaha request response: Jan 13 20:20:29.796467 update_engine[1456]: E20250113 20:20:29.796417 1456 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:20:29.796467 update_engine[1456]: I20250113 20:20:29.796447 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 20:20:29.796467 update_engine[1456]: I20250113 20:20:29.796458 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:20:29.796567 update_engine[1456]: I20250113 20:20:29.796466 1456 update_attempter.cc:306] Processing Done. Jan 13 20:20:29.796567 update_engine[1456]: E20250113 20:20:29.796486 1456 update_attempter.cc:619] Update failed. 
Jan 13 20:20:29.796567 update_engine[1456]: I20250113 20:20:29.796509 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 13 20:20:29.796567 update_engine[1456]: I20250113 20:20:29.796519 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 13 20:20:29.796567 update_engine[1456]: I20250113 20:20:29.796530 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 13 20:20:29.796775 update_engine[1456]: I20250113 20:20:29.796646 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 13 20:20:29.796775 update_engine[1456]: I20250113 20:20:29.796682 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 13 20:20:29.796775 update_engine[1456]: I20250113 20:20:29.796692 1456 omaha_request_action.cc:272] Request:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]:
Jan 13 20:20:29.796775 update_engine[1456]: I20250113 20:20:29.796703 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 20:20:29.797216 update_engine[1456]: I20250113 20:20:29.797056 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 20:20:29.797453 update_engine[1456]: I20250113 20:20:29.797371 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 20:20:29.797770 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 13 20:20:29.798703 update_engine[1456]: E20250113 20:20:29.798107 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798192 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798206 1456 omaha_request_action.cc:617] Omaha request response:
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798220 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798229 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798238 1456 update_attempter.cc:306] Processing Done.
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798249 1456 update_attempter.cc:310] Error event sent.
Jan 13 20:20:29.798703 update_engine[1456]: I20250113 20:20:29.798264 1456 update_check_scheduler.cc:74] Next update check in 45m49s
Jan 13 20:20:29.799258 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 13 20:24:10.012374 systemd[1]: Started sshd@5-138.199.153.196:22-139.178.89.65:44276.service - OpenSSH per-connection server daemon (139.178.89.65:44276).
Jan 13 20:24:11.005041 sshd[4725]: Accepted publickey for core from 139.178.89.65 port 44276 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:11.007523 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:11.016038 systemd-logind[1455]: New session 6 of user core.
Jan 13 20:24:11.030286 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:24:11.777609 sshd[4742]: Connection closed by 139.178.89.65 port 44276
Jan 13 20:24:11.776835 sshd-session[4725]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:11.785733 systemd[1]: sshd@5-138.199.153.196:22-139.178.89.65:44276.service: Deactivated successfully.
Jan 13 20:24:11.791487 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:24:11.795124 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:24:11.797456 systemd-logind[1455]: Removed session 6.
Jan 13 20:24:16.958025 systemd[1]: Started sshd@6-138.199.153.196:22-139.178.89.65:38078.service - OpenSSH per-connection server daemon (139.178.89.65:38078).
Jan 13 20:24:17.966139 sshd[4777]: Accepted publickey for core from 139.178.89.65 port 38078 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:17.968508 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:17.974872 systemd-logind[1455]: New session 7 of user core.
Jan 13 20:24:17.982171 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:24:18.764917 sshd[4779]: Connection closed by 139.178.89.65 port 38078
Jan 13 20:24:18.765567 sshd-session[4777]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:18.770863 systemd[1]: sshd@6-138.199.153.196:22-139.178.89.65:38078.service: Deactivated successfully.
Jan 13 20:24:18.775551 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:24:18.777786 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:24:18.782276 systemd-logind[1455]: Removed session 7.
Jan 13 20:24:23.955056 systemd[1]: Started sshd@7-138.199.153.196:22-139.178.89.65:48250.service - OpenSSH per-connection server daemon (139.178.89.65:48250).
Jan 13 20:24:24.951979 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 48250 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:24.953285 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:24.963215 systemd-logind[1455]: New session 8 of user core.
Jan 13 20:24:24.965073 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:24:25.714085 sshd[4820]: Connection closed by 139.178.89.65 port 48250
Jan 13 20:24:25.716192 sshd-session[4812]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:25.721443 systemd[1]: sshd@7-138.199.153.196:22-139.178.89.65:48250.service: Deactivated successfully.
Jan 13 20:24:25.727280 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:24:25.731424 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:24:25.733529 systemd-logind[1455]: Removed session 8.
Jan 13 20:24:25.892606 systemd[1]: Started sshd@8-138.199.153.196:22-139.178.89.65:48264.service - OpenSSH per-connection server daemon (139.178.89.65:48264).
Jan 13 20:24:26.887467 sshd[4848]: Accepted publickey for core from 139.178.89.65 port 48264 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:26.890864 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:26.899415 systemd-logind[1455]: New session 9 of user core.
Jan 13 20:24:26.903135 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:24:27.730578 sshd[4852]: Connection closed by 139.178.89.65 port 48264
Jan 13 20:24:27.731386 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:27.736507 systemd[1]: sshd@8-138.199.153.196:22-139.178.89.65:48264.service: Deactivated successfully.
Jan 13 20:24:27.739282 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:24:27.743299 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:24:27.745074 systemd-logind[1455]: Removed session 9.
Jan 13 20:24:27.910640 systemd[1]: Started sshd@9-138.199.153.196:22-139.178.89.65:48278.service - OpenSSH per-connection server daemon (139.178.89.65:48278).
Jan 13 20:24:28.910261 sshd[4861]: Accepted publickey for core from 139.178.89.65 port 48278 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:28.912706 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:28.923598 systemd-logind[1455]: New session 10 of user core.
Jan 13 20:24:28.931976 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:24:29.670728 sshd[4863]: Connection closed by 139.178.89.65 port 48278
Jan 13 20:24:29.672487 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:29.680423 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:24:29.680882 systemd[1]: sshd@9-138.199.153.196:22-139.178.89.65:48278.service: Deactivated successfully.
Jan 13 20:24:29.684650 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:24:29.688334 systemd-logind[1455]: Removed session 10.
Jan 13 20:24:34.843280 systemd[1]: Started sshd@10-138.199.153.196:22-139.178.89.65:50074.service - OpenSSH per-connection server daemon (139.178.89.65:50074).
Jan 13 20:24:35.829910 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 50074 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:35.832011 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:35.851242 systemd-logind[1455]: New session 11 of user core.
Jan 13 20:24:35.855355 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:24:36.607322 sshd[4919]: Connection closed by 139.178.89.65 port 50074
Jan 13 20:24:36.607393 sshd-session[4902]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:36.612310 systemd[1]: sshd@10-138.199.153.196:22-139.178.89.65:50074.service: Deactivated successfully.
Jan 13 20:24:36.616973 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:24:36.620649 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:24:36.624172 systemd-logind[1455]: Removed session 11.
Jan 13 20:24:36.785122 systemd[1]: Started sshd@11-138.199.153.196:22-139.178.89.65:50076.service - OpenSSH per-connection server daemon (139.178.89.65:50076).
Jan 13 20:24:37.775827 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 50076 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:37.778801 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:37.790406 systemd-logind[1455]: New session 12 of user core.
Jan 13 20:24:37.793481 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:24:38.597135 sshd[4932]: Connection closed by 139.178.89.65 port 50076
Jan 13 20:24:38.598138 sshd-session[4930]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:38.602637 systemd[1]: sshd@11-138.199.153.196:22-139.178.89.65:50076.service: Deactivated successfully.
Jan 13 20:24:38.605074 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:24:38.608063 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:24:38.610026 systemd-logind[1455]: Removed session 12.
Jan 13 20:24:38.771254 systemd[1]: Started sshd@12-138.199.153.196:22-139.178.89.65:50080.service - OpenSSH per-connection server daemon (139.178.89.65:50080).
Jan 13 20:24:39.776491 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 50080 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:39.779429 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:39.787254 systemd-logind[1455]: New session 13 of user core.
Jan 13 20:24:39.792226 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:24:42.122731 sshd[4949]: Connection closed by 139.178.89.65 port 50080
Jan 13 20:24:42.122579 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:42.128545 systemd[1]: sshd@12-138.199.153.196:22-139.178.89.65:50080.service: Deactivated successfully.
Jan 13 20:24:42.134831 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:24:42.139012 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:24:42.140449 systemd-logind[1455]: Removed session 13.
Jan 13 20:24:42.299507 systemd[1]: Started sshd@13-138.199.153.196:22-139.178.89.65:54440.service - OpenSSH per-connection server daemon (139.178.89.65:54440).
Jan 13 20:24:43.295767 sshd[4982]: Accepted publickey for core from 139.178.89.65 port 54440 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:43.298610 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:43.309371 systemd-logind[1455]: New session 14 of user core.
Jan 13 20:24:43.316741 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:24:44.218038 sshd[4984]: Connection closed by 139.178.89.65 port 54440
Jan 13 20:24:44.217425 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:44.226822 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:24:44.228303 systemd[1]: sshd@13-138.199.153.196:22-139.178.89.65:54440.service: Deactivated successfully.
Jan 13 20:24:44.234086 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:24:44.236744 systemd-logind[1455]: Removed session 14.
Jan 13 20:24:44.391839 systemd[1]: Started sshd@14-138.199.153.196:22-139.178.89.65:54450.service - OpenSSH per-connection server daemon (139.178.89.65:54450).
Jan 13 20:24:45.393871 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 54450 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:45.396566 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:45.405693 systemd-logind[1455]: New session 15 of user core.
Jan 13 20:24:45.415007 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:24:46.160440 sshd[5001]: Connection closed by 139.178.89.65 port 54450
Jan 13 20:24:46.161316 sshd-session[4993]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:46.165668 systemd[1]: sshd@14-138.199.153.196:22-139.178.89.65:54450.service: Deactivated successfully.
Jan 13 20:24:46.169562 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:24:46.173297 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:24:46.174733 systemd-logind[1455]: Removed session 15.
Jan 13 20:24:51.337487 systemd[1]: Started sshd@15-138.199.153.196:22-139.178.89.65:50136.service - OpenSSH per-connection server daemon (139.178.89.65:50136).
Jan 13 20:24:52.333572 sshd[5052]: Accepted publickey for core from 139.178.89.65 port 50136 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:52.334735 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:52.345113 systemd-logind[1455]: New session 16 of user core.
Jan 13 20:24:52.350176 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:24:53.078116 sshd[5054]: Connection closed by 139.178.89.65 port 50136
Jan 13 20:24:53.079080 sshd-session[5052]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:53.087378 systemd[1]: sshd@15-138.199.153.196:22-139.178.89.65:50136.service: Deactivated successfully.
Jan 13 20:24:53.092860 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:24:53.096079 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:24:53.099717 systemd-logind[1455]: Removed session 16.
Jan 13 20:24:58.258323 systemd[1]: Started sshd@16-138.199.153.196:22-139.178.89.65:50152.service - OpenSSH per-connection server daemon (139.178.89.65:50152).
Jan 13 20:24:59.245245 sshd[5086]: Accepted publickey for core from 139.178.89.65 port 50152 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc
Jan 13 20:24:59.247687 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:59.254677 systemd-logind[1455]: New session 17 of user core.
Jan 13 20:24:59.261299 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:25:00.032509 sshd[5088]: Connection closed by 139.178.89.65 port 50152
Jan 13 20:25:00.034238 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:00.040990 systemd[1]: sshd@16-138.199.153.196:22-139.178.89.65:50152.service: Deactivated successfully.
Jan 13 20:25:00.049824 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:25:00.053297 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:25:00.055343 systemd-logind[1455]: Removed session 17.
Jan 13 20:25:07.353236 kernel: hrtimer: interrupt took 2739275 ns