Jan 29 11:58:52.889802 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 11:58:52.889826 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 29 11:58:52.889836 kernel: KASLR enabled Jan 29 11:58:52.889842 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 29 11:58:52.889847 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Jan 29 11:58:52.889853 kernel: random: crng init done Jan 29 11:58:52.889860 kernel: ACPI: Early table checksum verification disabled Jan 29 11:58:52.889866 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 29 11:58:52.889872 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 29 11:58:52.889880 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889886 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889891 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889897 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889903 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889911 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889918 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889925 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889931 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:52.889938 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:58:52.889944 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 29 11:58:52.889950 kernel: NUMA: Failed to initialise from firmware Jan 29 11:58:52.889956 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 11:58:52.889963 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Jan 29 11:58:52.889969 kernel: Zone ranges: Jan 29 11:58:52.889975 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 11:58:52.889983 kernel: DMA32 empty Jan 29 11:58:52.889989 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 29 11:58:52.889995 kernel: Movable zone start for each node Jan 29 11:58:52.890002 kernel: Early memory node ranges Jan 29 11:58:52.890008 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Jan 29 11:58:52.890014 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 29 11:58:52.890021 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 29 11:58:52.890027 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 29 11:58:52.890033 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 29 11:58:52.890039 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 29 11:58:52.890046 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jan 29 11:58:52.892076 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 11:58:52.892125 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 29 11:58:52.892133 kernel: psci: probing for conduit method from ACPI. 
Jan 29 11:58:52.892140 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 11:58:52.892149 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 11:58:52.892156 kernel: psci: Trusted OS migration not required Jan 29 11:58:52.892163 kernel: psci: SMC Calling Convention v1.1 Jan 29 11:58:52.892171 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 11:58:52.892178 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 11:58:52.892185 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 11:58:52.892193 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 11:58:52.892200 kernel: Detected PIPT I-cache on CPU0 Jan 29 11:58:52.892207 kernel: CPU features: detected: GIC system register CPU interface Jan 29 11:58:52.892213 kernel: CPU features: detected: Hardware dirty bit management Jan 29 11:58:52.892220 kernel: CPU features: detected: Spectre-v4 Jan 29 11:58:52.892227 kernel: CPU features: detected: Spectre-BHB Jan 29 11:58:52.892234 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 11:58:52.892242 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 11:58:52.892249 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 11:58:52.892256 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 11:58:52.892262 kernel: alternatives: applying boot alternatives Jan 29 11:58:52.892271 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 11:58:52.892278 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:58:52.892285 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:58:52.892292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:58:52.892298 kernel: Fallback order for Node 0: 0 Jan 29 11:58:52.892305 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 29 11:58:52.892312 kernel: Policy zone: Normal Jan 29 11:58:52.892320 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:58:52.892327 kernel: software IO TLB: area num 2. Jan 29 11:58:52.892334 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 29 11:58:52.892341 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved) Jan 29 11:58:52.892349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:58:52.892355 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:58:52.892363 kernel: rcu: RCU event tracing is enabled. Jan 29 11:58:52.892370 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:58:52.892377 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:58:52.892384 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:58:52.892390 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 11:58:52.892399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:58:52.892406 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 11:58:52.892412 kernel: GICv3: 256 SPIs implemented Jan 29 11:58:52.892419 kernel: GICv3: 0 Extended SPIs implemented Jan 29 11:58:52.892426 kernel: Root IRQ handler: gic_handle_irq Jan 29 11:58:52.892433 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 11:58:52.892440 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 11:58:52.892446 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 11:58:52.892453 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 11:58:52.892460 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 11:58:52.892467 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 29 11:58:52.892474 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 29 11:58:52.892482 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:58:52.892489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 11:58:52.892496 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 11:58:52.892503 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 11:58:52.892510 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 11:58:52.892517 kernel: Console: colour dummy device 80x25 Jan 29 11:58:52.892524 kernel: ACPI: Core revision 20230628 Jan 29 11:58:52.892532 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 11:58:52.892539 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:58:52.892546 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:58:52.892555 kernel: landlock: Up and running. Jan 29 11:58:52.892561 kernel: SELinux: Initializing. Jan 29 11:58:52.892568 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:58:52.892576 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:58:52.892583 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:52.892590 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:52.892597 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:58:52.892604 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:58:52.892611 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 11:58:52.892619 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 11:58:52.892626 kernel: Remapping and enabling EFI services. Jan 29 11:58:52.892633 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:58:52.892641 kernel: Detected PIPT I-cache on CPU1 Jan 29 11:58:52.892648 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 11:58:52.892655 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 29 11:58:52.892671 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 11:58:52.892680 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 11:58:52.892687 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:58:52.892694 kernel: SMP: Total of 2 processors activated. 
Jan 29 11:58:52.892704 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 11:58:52.892711 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 11:58:52.892723 kernel: CPU features: detected: Common not Private translations Jan 29 11:58:52.892732 kernel: CPU features: detected: CRC32 instructions Jan 29 11:58:52.892740 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 11:58:52.892747 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 11:58:52.892755 kernel: CPU features: detected: LSE atomic instructions Jan 29 11:58:52.892762 kernel: CPU features: detected: Privileged Access Never Jan 29 11:58:52.892770 kernel: CPU features: detected: RAS Extension Support Jan 29 11:58:52.892779 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 11:58:52.892786 kernel: CPU: All CPU(s) started at EL1 Jan 29 11:58:52.892793 kernel: alternatives: applying system-wide alternatives Jan 29 11:58:52.892801 kernel: devtmpfs: initialized Jan 29 11:58:52.892808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:58:52.892816 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:58:52.892823 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:58:52.892832 kernel: SMBIOS 3.0.0 present. Jan 29 11:58:52.892839 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 29 11:58:52.892847 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:58:52.892854 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 11:58:52.892861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 11:58:52.892869 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 11:58:52.892876 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:58:52.892884 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Jan 29 11:58:52.892891 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:58:52.892899 kernel: cpuidle: using governor menu Jan 29 11:58:52.892907 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 29 11:58:52.892914 kernel: ASID allocator initialised with 32768 entries Jan 29 11:58:52.892921 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:58:52.892929 kernel: Serial: AMBA PL011 UART driver Jan 29 11:58:52.892936 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 11:58:52.892943 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 11:58:52.892951 kernel: Modules: 509040 pages in range for PLT usage Jan 29 11:58:52.892958 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:58:52.892967 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:58:52.892974 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 11:58:52.892981 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 11:58:52.892989 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:58:52.892996 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:58:52.893003 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 11:58:52.893011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 11:58:52.893018 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:58:52.893025 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:58:52.893034 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:58:52.893042 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:58:52.893049 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:58:52.893292 kernel: ACPI: Interpreter enabled Jan 29 11:58:52.893303 kernel: ACPI: Using GIC for interrupt routing Jan 29 11:58:52.893310 kernel: ACPI: MCFG table detected, 1 entries Jan 29 11:58:52.893317 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 11:58:52.893325 kernel: printk: console [ttyAMA0] enabled Jan 29 11:58:52.893332 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:58:52.893490 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:58:52.893566 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 11:58:52.893631 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 11:58:52.893743 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 11:58:52.893813 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 11:58:52.893823 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 29 11:58:52.893831 kernel: PCI host bridge to bus 0000:00 Jan 29 11:58:52.893907 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 11:58:52.893969 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 11:58:52.894030 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 11:58:52.896190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:58:52.896297 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 11:58:52.896377 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 29 11:58:52.896454 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 29 11:58:52.896522 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 11:58:52.896598 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.896680 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 29 11:58:52.896767 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.896835 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 29 11:58:52.896909 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.896981 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 29 11:58:52.897159 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.897244 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 29 11:58:52.897324 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.897389 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 29 11:58:52.897460 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.897530 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 29 11:58:52.897602 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.897714 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 29 11:58:52.897802 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.897870 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 29 11:58:52.897944 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:52.898016 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 29 11:58:52.898106 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 29 11:58:52.898176 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jan 29 11:58:52.898253 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 11:58:52.898325 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 29 11:58:52.898396 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 11:58:52.898469 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 11:58:52.898547 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 29 11:58:52.898618 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 29 11:58:52.898718 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 29 11:58:52.898791 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 29 11:58:52.898860 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 29 11:58:52.898936 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 29 11:58:52.899011 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 29 11:58:52.901242 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 29 11:58:52.901339 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jan 29 11:58:52.901412 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 29 11:58:52.901559 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 29 11:58:52.901643 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 29 11:58:52.901738 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 11:58:52.901819 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 11:58:52.901890 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 29 11:58:52.901959 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 29 11:58:52.902027 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 11:58:52.902118 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 29 11:58:52.902193 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 29 11:58:52.902262 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 29 11:58:52.902334 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 29 11:58:52.902400 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 29 11:58:52.902465 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 29 11:58:52.902534 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 29 11:58:52.902599 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 29 11:58:52.902694 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 29 11:58:52.902780 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 29 11:58:52.902850 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 29 11:58:52.902915 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 29 11:58:52.902984 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 29 11:58:52.903050 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 29 11:58:52.905982 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 29 11:58:52.906082 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:58:52.906163 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 29 11:58:52.906233 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 29 11:58:52.906304 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:58:52.906370 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 29 11:58:52.906435 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 29 11:58:52.906504 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:58:52.906569 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 29 11:58:52.906642 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 29 11:58:52.906764 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:58:52.906836 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 29 11:58:52.906904 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 29 11:58:52.906975 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Jan 29 11:58:52.907043 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:52.907125 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 29 11:58:52.907192 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:52.907265 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 29 11:58:52.907332 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:52.907401 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 29 11:58:52.907467 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:52.907533 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 29 11:58:52.907598 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:52.907673 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 29 11:58:52.907754 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:52.907823 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 29 11:58:52.907890 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:52.907956 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 29 11:58:52.908023 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:52.908339 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 29 11:58:52.908425 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:52.908493 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 29 11:58:52.908571 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 29 11:58:52.908638 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 29 11:58:52.908719 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 29 11:58:52.909597 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 29 11:58:52.909732 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 29 11:58:52.909810 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 29 11:58:52.909882 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 29 11:58:52.909949 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 29 11:58:52.910013 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 29 11:58:52.910104 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 29 11:58:52.910172 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 29 11:58:52.910238 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 29 11:58:52.910302 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 29 11:58:52.910370 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 29 11:58:52.910472 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 29 11:58:52.910545 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 29 11:58:52.910609 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 29 11:58:52.910699 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 29 11:58:52.910771 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Jan 29 11:58:52.910841 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 29 11:58:52.910916 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 29 11:58:52.910984 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 11:58:52.911777 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 29 11:58:52.911894 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 29 11:58:52.911977 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 29 11:58:52.912050 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 29 11:58:52.912135 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:52.912210 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 29 11:58:52.912286 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 29 11:58:52.912351 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 29 11:58:52.912416 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 29 11:58:52.912479 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:52.912552 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 11:58:52.912620 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 29 11:58:52.912707 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 29 11:58:52.912776 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 29 11:58:52.912841 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 29 11:58:52.912906 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:52.912980 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 11:58:52.913048 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 29 11:58:52.913136 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 29 11:58:52.913202 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 29 11:58:52.913271 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:52.913345 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 29 11:58:52.913414 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jan 29 11:58:52.913480 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 29 11:58:52.913546 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 29 11:58:52.913611 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 29 11:58:52.913687 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:52.913766 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 29 11:58:52.913840 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 29 11:58:52.913911 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 29 11:58:52.913979 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 29 11:58:52.914044 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 29 11:58:52.914247 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:52.914322 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 29 11:58:52.914389 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 29 11:58:52.914457 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 29 11:58:52.914527 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 29 11:58:52.914590 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 29 11:58:52.914653 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 29 11:58:52.914760 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:52.914831 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 29 11:58:52.914895 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 29 11:58:52.914957 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 29 11:58:52.915021 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:52.915105 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 29 11:58:52.915171 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 29 11:58:52.915235 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 29 11:58:52.915298 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:52.915363 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 11:58:52.915420 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 11:58:52.915477 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 11:58:52.915555 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 29 11:58:52.915619 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 29 11:58:52.915694 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:52.915765 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 29 11:58:52.915850 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 29 11:58:52.915912 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:52.915979 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 29 11:58:52.916043 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 29 11:58:52.916163 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:52.916243 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 11:58:52.916305 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 29 11:58:52.916363 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:52.916429 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 29 11:58:52.916490 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 29 11:58:52.916550 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:52.916622 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 29 11:58:52.916700 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 29 11:58:52.916769 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:52.916838 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 29 11:58:52.916899 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 29 11:58:52.916960 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:52.917031 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 29 11:58:52.917106 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 29 11:58:52.917169 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:52.917239 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 29 11:58:52.917300 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 29 11:58:52.917362 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:52.917372 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 11:58:52.917380 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 11:58:52.917388 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 11:58:52.917396 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 11:58:52.917404 kernel: iommu: Default domain type: Translated Jan 29 11:58:52.917417 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 11:58:52.917425 kernel: efivars: Registered efivars operations Jan 29 11:58:52.917433 kernel: vgaarb: loaded Jan 29 11:58:52.917441 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 11:58:52.917448 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:58:52.917456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:58:52.917464 kernel: pnp: PnP ACPI init Jan 29 11:58:52.917537 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 11:58:52.917549 kernel: pnp: PnP ACPI: found 1 devices Jan 29 11:58:52.917559 kernel: NET: Registered PF_INET protocol family Jan 29 11:58:52.917567 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:58:52.917575 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:58:52.917583 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:58:52.917590 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:58:52.917598 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:58:52.917607 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:58:52.917614 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:58:52.917622 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:58:52.917632 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:58:52.917743 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 29 11:58:52.917757 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:58:52.917765 kernel: kvm [1]: HYP mode not available Jan 29 11:58:52.917773 kernel: Initialise system trusted keyrings Jan 29 11:58:52.917784 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:58:52.917792 kernel: Key type asymmetric registered Jan 29 11:58:52.917800 kernel: Asymmetric key parser 'x509' registered Jan 29 11:58:52.917808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 11:58:52.917817 kernel: io scheduler mq-deadline registered Jan 29 11:58:52.917825 kernel: io scheduler kyber registered Jan 29 11:58:52.917833 kernel: io scheduler bfq registered Jan 29 11:58:52.917841 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 11:58:52.917912 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 29 11:58:52.917980 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 29 11:58:52.918046 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.918162 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 29 11:58:52.918233 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Jan 29 11:58:52.918299 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.918371 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:58:52.918436 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:58:52.918502 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.918574 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:58:52.918639 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:58:52.918721 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.918792 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:58:52.918858 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:58:52.918923 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.918993 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:58:52.920153 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:58:52.920263 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.920343 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:58:52.920412 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:58:52.920479 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.920551 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:58:52.920616 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:58:52.920697 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.920709 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:58:52.920774 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:58:52.920839 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:58:52.920907 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:52.920917 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:58:52.920925 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:58:52.920933 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:58:52.921002 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:58:52.922169 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:58:52.922191 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:58:52.922200 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:58:52.922281 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:58:52.922301 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:58:52.922311 kernel: thunder_xcv, ver 1.0 Jan 29 11:58:52.922319 kernel: thunder_bgx, ver 1.0 Jan 29 11:58:52.922328 kernel: nicpf, ver 1.0 Jan 29 11:58:52.922336 kernel: nicvf, ver 
1.0 Jan 29 11:58:52.922414 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:58:52.922477 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:58:52 UTC (1738151932) Jan 29 11:58:52.922488 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:58:52.922498 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:58:52.922505 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:58:52.922513 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:58:52.922521 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:58:52.922529 kernel: Segment Routing with IPv6 Jan 29 11:58:52.922537 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:58:52.922545 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:58:52.922552 kernel: Key type dns_resolver registered Jan 29 11:58:52.922560 kernel: registered taskstats version 1 Jan 29 11:58:52.922569 kernel: Loading compiled-in X.509 certificates Jan 29 11:58:52.922577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 29 11:58:52.922585 kernel: Key type .fscrypt registered Jan 29 11:58:52.922592 kernel: Key type fscrypt-provisioning registered Jan 29 11:58:52.922600 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:58:52.922608 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:58:52.922617 kernel: ima: No architecture policies found Jan 29 11:58:52.922625 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:58:52.922633 kernel: clk: Disabling unused clocks Jan 29 11:58:52.922642 kernel: Freeing unused kernel memory: 39360K Jan 29 11:58:52.922650 kernel: Run /init as init process Jan 29 11:58:52.922657 kernel: with arguments: Jan 29 11:58:52.922678 kernel: /init Jan 29 11:58:52.922686 kernel: with environment: Jan 29 11:58:52.922694 kernel: HOME=/ Jan 29 11:58:52.922701 kernel: TERM=linux Jan 29 11:58:52.922709 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:58:52.922719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:52.922732 systemd[1]: Detected virtualization kvm. Jan 29 11:58:52.922740 systemd[1]: Detected architecture arm64. Jan 29 11:58:52.922748 systemd[1]: Running in initrd. Jan 29 11:58:52.922756 systemd[1]: No hostname configured, using default hostname. Jan 29 11:58:52.922763 systemd[1]: Hostname set to . Jan 29 11:58:52.922772 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:52.922780 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:58:52.922790 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:52.922799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:52.922807 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:58:52.922815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:52.922824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 29 11:58:52.922832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:58:52.922842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:58:52.922852 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:58:52.922860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:52.922869 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:52.922877 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:52.922885 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:52.922893 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:52.922901 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:52.922909 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:52.922919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:52.922927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:58:52.922936 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:58:52.922944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:52.922952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:52.922960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:52.922969 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:52.922977 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:58:52.922985 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:52.922995 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:58:52.923003 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:58:52.923011 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:52.923020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:52.923028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:52.923036 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:52.923044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:52.923462 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:58:52.923486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:58:52.923495 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:58:52.923504 kernel: Bridge firewalling registered Jan 29 11:58:52.923539 systemd-journald[236]: Collecting audit messages is disabled. Jan 29 11:58:52.923563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:52.923571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:52.923580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:52.923589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 11:58:52.923600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:58:52.923608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:52.923617 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:52.923626 systemd-journald[236]: Journal started Jan 29 11:58:52.923646 systemd-journald[236]: Runtime Journal (/run/log/journal/d475691b581943da8c6015bf15cac6ef) is 8.0M, max 76.6M, 68.6M free. Jan 29 11:58:52.869356 systemd-modules-load[237]: Inserted module 'overlay' Jan 29 11:58:52.884072 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 29 11:58:52.928077 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:52.933006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:52.938273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:52.939706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:52.942723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:58:52.957809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:52.963813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:52.973944 dracut-cmdline[270]: dracut-dracut-053 Jan 29 11:58:52.978630 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 11:58:53.004585 systemd-resolved[273]: Positive Trust Anchors: Jan 29 11:58:53.004599 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:53.004632 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:53.013924 systemd-resolved[273]: Defaulting to hostname 'linux'. Jan 29 11:58:53.016146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:53.016761 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:53.066085 kernel: SCSI subsystem initialized Jan 29 11:58:53.070120 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:58:53.078097 kernel: iscsi: registered transport (tcp) Jan 29 11:58:53.092089 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:58:53.092188 kernel: QLogic iSCSI HBA Driver Jan 29 11:58:53.137797 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:53.143237 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 29 11:58:53.177284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:58:53.177350 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:58:53.178079 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:58:53.234110 kernel: raid6: neonx8 gen() 15287 MB/s Jan 29 11:58:53.251131 kernel: raid6: neonx4 gen() 15307 MB/s Jan 29 11:58:53.268099 kernel: raid6: neonx2 gen() 13072 MB/s Jan 29 11:58:53.285100 kernel: raid6: neonx1 gen() 10385 MB/s Jan 29 11:58:53.302144 kernel: raid6: int64x8 gen() 6911 MB/s Jan 29 11:58:53.319114 kernel: raid6: int64x4 gen() 7217 MB/s Jan 29 11:58:53.336101 kernel: raid6: int64x2 gen() 6055 MB/s Jan 29 11:58:53.353131 kernel: raid6: int64x1 gen() 4948 MB/s Jan 29 11:58:53.353215 kernel: raid6: using algorithm neonx4 gen() 15307 MB/s Jan 29 11:58:53.370848 kernel: raid6: .... xor() 11975 MB/s, rmw enabled Jan 29 11:58:53.370921 kernel: raid6: using neon recovery algorithm Jan 29 11:58:53.378283 kernel: xor: measuring software checksum speed Jan 29 11:58:53.378364 kernel: 8regs : 19740 MB/sec Jan 29 11:58:53.379546 kernel: 32regs : 8520 MB/sec Jan 29 11:58:53.380987 kernel: arm64_neon : 20413 MB/sec Jan 29 11:58:53.381050 kernel: xor: using function: arm64_neon (20413 MB/sec) Jan 29 11:58:53.435117 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:58:53.452168 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:53.461260 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:53.476969 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jan 29 11:58:53.480189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:53.488285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:58:53.505559 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Jan 29 11:58:53.542931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:53.547223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:53.597095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:53.604634 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:58:53.623886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:53.629172 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:53.629788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:53.631704 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:53.638213 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:58:53.662749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:58:53.695098 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:58:53.707191 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:58:53.707266 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 11:58:53.710071 kernel: ACPI: bus type USB registered Jan 29 11:58:53.710126 kernel: usbcore: registered new interface driver usbfs Jan 29 11:58:53.710172 kernel: usbcore: registered new interface driver hub Jan 29 11:58:53.711094 kernel: usbcore: registered new device driver usb Jan 29 11:58:53.733574 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:53.733705 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:53.737208 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:53.737738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:53.737878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:53.738491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:53.748298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:53.764095 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 11:58:53.771407 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 11:58:53.771542 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 11:58:53.771625 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:58:53.771643 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 11:58:53.771758 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 11:58:53.771846 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 11:58:53.771931 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:58:53.772017 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:58:53.772141 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:58:53.772152 kernel: GPT:17805311 != 80003071 Jan 29 11:58:53.772161 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:58:53.772174 kernel: GPT:17805311 != 80003071 Jan 29 11:58:53.772183 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:58:53.772192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:58:53.772201 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 11:58:53.776131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:53.781161 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:58:53.793170 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 11:58:53.793453 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 11:58:53.793792 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:58:53.793987 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 11:58:53.794121 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 11:58:53.794227 kernel: hub 1-0:1.0: USB hub found Jan 29 11:58:53.794347 kernel: hub 1-0:1.0: 4 ports detected Jan 29 11:58:53.794446 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 29 11:58:53.794571 kernel: hub 2-0:1.0: USB hub found Jan 29 11:58:53.794717 kernel: hub 2-0:1.0: 4 ports detected Jan 29 11:58:53.784313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:53.815004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:53.829079 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (510) Jan 29 11:58:53.835077 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507) Jan 29 11:58:53.845999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 11:58:53.851986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 11:58:53.858258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 11:58:53.863232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 11:58:53.863871 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 11:58:53.871403 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:58:53.878620 disk-uuid[571]: Primary Header is updated. Jan 29 11:58:53.878620 disk-uuid[571]: Secondary Entries is updated. Jan 29 11:58:53.878620 disk-uuid[571]: Secondary Header is updated. Jan 29 11:58:53.887079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:58:53.894120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:58:54.032288 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 11:58:54.275088 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 11:58:54.410617 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 11:58:54.410729 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 11:58:54.411811 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 11:58:54.466113 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 11:58:54.466481 kernel: usbcore: registered new interface driver usbhid Jan 29 11:58:54.467571 kernel: usbhid: USB HID core driver Jan 29 11:58:54.899157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:58:54.901289 disk-uuid[573]: The operation has completed successfully. Jan 29 11:58:54.951208 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:58:54.951315 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:58:54.971320 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:58:54.976437 sh[587]: Success Jan 29 11:58:54.989096 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 11:58:55.047291 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:58:55.054175 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:58:55.055132 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:58:55.069183 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 29 11:58:55.069253 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:58:55.069279 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:58:55.070194 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:58:55.070251 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:58:55.077073 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:58:55.079080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:58:55.079707 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:58:55.092540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:58:55.097336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:58:55.115085 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:58:55.115167 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:58:55.115181 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:58:55.120904 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:58:55.120992 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:58:55.132632 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:58:55.135106 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:58:55.143037 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:58:55.147291 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:58:55.231364 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:55.235177 ignition[685]: Ignition 2.19.0 Jan 29 11:58:55.235322 ignition[685]: Stage: fetch-offline Jan 29 11:58:55.238279 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:55.235361 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:55.239197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:58:55.235369 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:55.235537 ignition[685]: parsed url from cmdline: "" Jan 29 11:58:55.235540 ignition[685]: no config URL provided Jan 29 11:58:55.235544 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:58:55.235552 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:58:55.235556 ignition[685]: failed to fetch config: resource requires networking Jan 29 11:58:55.235748 ignition[685]: Ignition finished successfully Jan 29 11:58:55.259193 systemd-networkd[774]: lo: Link UP Jan 29 11:58:55.259206 systemd-networkd[774]: lo: Gained carrier Jan 29 11:58:55.260732 systemd-networkd[774]: Enumeration completed Jan 29 11:58:55.260825 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:58:55.262577 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:58:55.262580 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:55.262902 systemd[1]: Reached target network.target - Network. Jan 29 11:58:55.264579 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:55.264582 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:55.265147 systemd-networkd[774]: eth0: Link UP Jan 29 11:58:55.265150 systemd-networkd[774]: eth0: Gained carrier Jan 29 11:58:55.265157 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:55.268594 systemd-networkd[774]: eth1: Link UP Jan 29 11:58:55.268598 systemd-networkd[774]: eth1: Gained carrier Jan 29 11:58:55.268605 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:55.272279 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:58:55.285911 ignition[777]: Ignition 2.19.0 Jan 29 11:58:55.285924 ignition[777]: Stage: fetch Jan 29 11:58:55.286165 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:55.286230 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:55.286344 ignition[777]: parsed url from cmdline: "" Jan 29 11:58:55.286348 ignition[777]: no config URL provided Jan 29 11:58:55.286354 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:58:55.286364 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:58:55.286385 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 11:58:55.287094 ignition[777]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 11:58:55.291115 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:58:55.317225 systemd-networkd[774]: eth0: DHCPv4 address 91.107.217.81/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 11:58:55.487308 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 11:58:55.492663 ignition[777]: GET result: OK Jan 29 11:58:55.492785 ignition[777]: parsing config with SHA512: 036a6c59cd03be882a8c330ee83a874d7cc95711c67745bd54fafea96e205752d92b22dac0da0e304b53ee0efd9bc8f00e20d510b59bcf204680c46ade007208 Jan 29 11:58:55.498039 unknown[777]: fetched base config from "system" Jan 29 11:58:55.498065 unknown[777]: fetched base config from "system" Jan 29 11:58:55.498440 ignition[777]: fetch: fetch complete Jan 29 11:58:55.498071 unknown[777]: fetched user config from "hetzner" Jan 29 11:58:55.498445 ignition[777]: fetch: fetch passed Jan 29 11:58:55.501545 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:58:55.498483 ignition[777]: Ignition finished successfully Jan 29 11:58:55.508254 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:58:55.521583 ignition[785]: Ignition 2.19.0 Jan 29 11:58:55.521592 ignition[785]: Stage: kargs Jan 29 11:58:55.521785 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:55.521795 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:55.524951 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
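Aside: the fetch stage above can only succeed once DHCP has configured a route, which is why attempt #1 ends in "network is unreachable" and attempt #2 returns OK. A rough Python sketch of the same fetch-retry-and-digest flow (illustrative only, not Ignition's Go implementation; the URL is the one shown in the log):

    # Illustrative only: the same fetch-and-retry pattern the log shows for the
    # Hetzner userdata endpoint, followed by the SHA512 digest Ignition logs while
    # parsing the config. Not Ignition's actual (Go) code.
    import hashlib
    import time
    import urllib.error
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"   # from the log above

    def fetch_userdata(retries=5, delay=1.0):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                # Attempt #1 in the log fails exactly like this while DHCP is still pending.
                print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service not reachable")

    config = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())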
Jan 29 11:58:55.522662 ignition[785]: kargs: kargs passed Jan 29 11:58:55.522708 ignition[785]: Ignition finished successfully Jan 29 11:58:55.531520 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:58:55.545308 ignition[792]: Ignition 2.19.0 Jan 29 11:58:55.545315 ignition[792]: Stage: disks Jan 29 11:58:55.545474 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:55.548331 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:58:55.545483 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:55.549531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:58:55.546530 ignition[792]: disks: disks passed Jan 29 11:58:55.550337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:58:55.546577 ignition[792]: Ignition finished successfully Jan 29 11:58:55.551020 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:55.552018 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:58:55.553048 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:58:55.560318 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:58:55.577649 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:58:55.581610 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:58:55.594347 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:58:55.646381 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 29 11:58:55.647782 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:58:55.649392 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:58:55.663275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:58:55.667477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:58:55.674161 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:58:55.676080 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:58:55.678304 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:58:55.680070 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (808) Jan 29 11:58:55.682122 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:58:55.682184 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:58:55.682212 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:58:55.688080 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:58:55.688126 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:58:55.687731 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:58:55.699304 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:58:55.702235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:58:55.741778 coreos-metadata[810]: Jan 29 11:58:55.741 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 11:58:55.743937 coreos-metadata[810]: Jan 29 11:58:55.743 INFO Fetch successful Jan 29 11:58:55.746681 coreos-metadata[810]: Jan 29 11:58:55.745 INFO wrote hostname ci-4081-3-0-9-89f64f6996 to /sysroot/etc/hostname Jan 29 11:58:55.750462 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:58:55.751703 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:58:55.758238 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:58:55.763178 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:58:55.768037 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:58:55.863782 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:58:55.868303 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:58:55.872210 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:58:55.882114 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:58:55.907878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:58:55.913855 ignition[925]: INFO : Ignition 2.19.0 Jan 29 11:58:55.915756 ignition[925]: INFO : Stage: mount Jan 29 11:58:55.915756 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:55.915756 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:55.915756 ignition[925]: INFO : mount: mount passed Jan 29 11:58:55.915756 ignition[925]: INFO : Ignition finished successfully Jan 29 11:58:55.919754 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:58:55.927202 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:58:56.069605 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:58:56.074227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:58:56.084089 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (936) Jan 29 11:58:56.084149 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:58:56.085128 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:58:56.085162 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:58:56.088280 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:58:56.088318 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:58:56.092547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:58:56.113042 ignition[953]: INFO : Ignition 2.19.0 Jan 29 11:58:56.113814 ignition[953]: INFO : Stage: files Jan 29 11:58:56.114402 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:56.114929 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:56.116583 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:58:56.118287 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:58:56.118287 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:58:56.121259 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:58:56.122234 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:58:56.122234 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:58:56.121680 unknown[953]: wrote ssh authorized keys file for user: core Jan 29 11:58:56.124748 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 29 11:58:56.124748 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 29 11:58:56.187315 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:56.307730 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 11:58:56.313291 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Jan 29 11:58:56.578405 systemd-networkd[774]: eth0: Gained IPv6LL Jan 29 11:58:56.887284 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:58:57.209007 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 11:58:57.209007 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:57.211551 ignition[953]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:57.211551 ignition[953]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:57.211551 ignition[953]: INFO : files: files passed Jan 29 11:58:57.211551 ignition[953]: INFO : Ignition finished successfully Jan 29 11:58:57.213780 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:58:57.222542 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:58:57.224394 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:58:57.227304 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:58:57.227402 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 29 11:58:57.238111 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:57.238111 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:57.240895 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:57.244114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:57.245174 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:58:57.249228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:58:57.278152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:58:57.279543 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:58:57.281039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:58:57.283848 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:58:57.284868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:58:57.285788 systemd-networkd[774]: eth1: Gained IPv6LL Jan 29 11:58:57.291322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:58:57.303954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:57.315364 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:58:57.330543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:57.332153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:57.333948 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:58:57.335595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:58:57.335898 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:57.338030 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:58:57.338717 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:58:57.339743 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:58:57.340697 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:58:57.341681 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:58:57.342687 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:58:57.343658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:57.344711 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:58:57.345716 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:58:57.346656 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:58:57.347478 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:58:57.347605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:58:57.348773 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:57.349390 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:57.350373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 29 11:58:57.350790 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:57.351459 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:58:57.351571 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:57.353085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:58:57.353196 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:57.354542 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:58:57.354677 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:58:57.355469 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:58:57.355566 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:58:57.365377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:58:57.365961 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:58:57.366115 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:57.371863 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:58:57.373441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:58:57.373910 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:57.375805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:58:57.376266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:57.389531 ignition[1005]: INFO : Ignition 2.19.0 Jan 29 11:58:57.389531 ignition[1005]: INFO : Stage: umount Jan 29 11:58:57.389531 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:57.389531 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:58:57.393339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:58:57.397452 ignition[1005]: INFO : umount: umount passed Jan 29 11:58:57.397452 ignition[1005]: INFO : Ignition finished successfully Jan 29 11:58:57.393424 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:58:57.398421 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:58:57.399019 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:58:57.399110 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:58:57.399989 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:58:57.400099 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:58:57.402763 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:58:57.402830 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:58:57.403869 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:58:57.403908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:58:57.404999 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:58:57.405036 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:58:57.406083 systemd[1]: Stopped target network.target - Network. Jan 29 11:58:57.406872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:58:57.406920 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:58:57.407875 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:58:57.408614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:58:57.412119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:57.412782 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:58:57.414129 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:58:57.415184 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:58:57.415224 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:57.416000 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:58:57.416028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:57.416823 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:58:57.416861 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:58:57.417697 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:58:57.417735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:57.418576 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:58:57.418613 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:58:57.419673 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:58:57.420589 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:58:57.425107 systemd-networkd[774]: eth0: DHCPv6 lease lost Jan 29 11:58:57.426372 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:58:57.426498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:58:57.428519 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:58:57.428580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:57.429277 systemd-networkd[774]: eth1: DHCPv6 lease lost Jan 29 11:58:57.430711 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:58:57.430883 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:58:57.432522 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:58:57.432579 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:57.441850 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:58:57.442321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:58:57.442377 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:57.443842 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:58:57.443884 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:57.444925 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:58:57.444968 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:57.446140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:57.460152 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:58:57.460367 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:57.462448 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 29 11:58:57.462581 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:58:57.464281 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:58:57.464363 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:57.465702 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:58:57.465742 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:57.466313 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:58:57.466354 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:57.467673 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:58:57.467714 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:57.469029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:57.469089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:57.475265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:58:57.475786 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:58:57.475837 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:57.477480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:57.477517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:57.484735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:58:57.484852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:58:57.485594 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:58:57.493345 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:58:57.504930 systemd[1]: Switching root. Jan 29 11:58:57.536731 systemd-journald[236]: Journal stopped Jan 29 11:58:58.402540 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 29 11:58:58.402599 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:58:58.402616 kernel: SELinux: policy capability open_perms=1 Jan 29 11:58:58.402661 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:58:58.402677 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:58:58.402686 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:58:58.402696 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:58:58.402706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:58:58.402716 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:58:58.402725 kernel: audit: type=1403 audit(1738151937.686:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:58:58.402739 systemd[1]: Successfully loaded SELinux policy in 34.789ms. Jan 29 11:58:58.402760 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.514ms. Jan 29 11:58:58.402772 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:58.402783 systemd[1]: Detected virtualization kvm. Jan 29 11:58:58.402794 systemd[1]: Detected architecture arm64. 
Jan 29 11:58:58.402805 systemd[1]: Detected first boot. Jan 29 11:58:58.402818 systemd[1]: Hostname set to <ci-4081-3-0-9-89f64f6996>. Jan 29 11:58:58.402829 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:58.402840 zram_generator::config[1047]: No configuration found. Jan 29 11:58:58.402855 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:58:58.402866 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:58:58.402877 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:58:58.402887 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:58:58.402898 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:58:58.402909 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:58:58.402920 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:58:58.402931 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:58:58.402943 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:58:58.402954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:58:58.402965 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:58:58.402975 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:58:58.402986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:58.402996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:58.403007 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:58:58.403018 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:58:58.403030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:58:58.403041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:58.403051 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:58:58.403168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:58.403180 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:58:58.403191 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:58:58.403202 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:58:58.403215 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:58:58.403226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:58.403237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:58.403247 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:58.403258 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:58.403269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:58:58.403280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:58:58.403291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:58.403302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:58:58.403315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:58.403326 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:58:58.403337 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:58:58.403348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:58:58.403358 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:58:58.403370 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:58:58.403380 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:58:58.403392 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:58:58.403402 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:58:58.403414 systemd[1]: Reached target machines.target - Containers. Jan 29 11:58:58.403428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:58:58.403439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:58.403452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:58.403464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:58:58.403476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:58.403487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:58.403497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:58.403508 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:58:58.403519 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:58.403530 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:58:58.403543 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:58:58.403554 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:58:58.403566 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:58:58.403577 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:58:58.403587 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:58.403598 kernel: fuse: init (API version 7.39) Jan 29 11:58:58.403609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:58.403630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:58:58.403644 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:58:58.403655 kernel: ACPI: bus type drm_connector registered Jan 29 11:58:58.403665 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:58.403679 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:58:58.403690 systemd[1]: Stopped verity-setup.service. Jan 29 11:58:58.403701 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:58:58.403711 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 29 11:58:58.403722 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:58:58.403734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:58:58.403745 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:58:58.403755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:58:58.403766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:58.403777 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:58:58.403788 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:58:58.403799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:58.403809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:58.403820 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:58.403832 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:58.403843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:58.403855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:58.403867 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:58:58.403878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:58:58.403889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:58:58.403901 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:58:58.403911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:58:58.403922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:58.403933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:58:58.403944 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:58:58.403954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:58:58.403965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:58:58.403976 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:58:58.403988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:58:58.404027 systemd-journald[1110]: Collecting audit messages is disabled. Jan 29 11:58:58.404048 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:58.405106 kernel: loop: module loaded Jan 29 11:58:58.405129 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:58:58.405144 systemd-journald[1110]: Journal started Jan 29 11:58:58.405175 systemd-journald[1110]: Runtime Journal (/run/log/journal/d475691b581943da8c6015bf15cac6ef) is 8.0M, max 76.6M, 68.6M free. Jan 29 11:58:58.144103 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:58:58.161132 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:58:58.161673 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:58:58.410085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:58:58.414155 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 29 11:58:58.416153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:58.423147 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:58:58.425081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:58.433084 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:58:58.435149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:58.438303 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:58:58.442125 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:58:58.444089 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:58.444605 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:58.444778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:58.447545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:58:58.468518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:58.492445 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:58:58.493864 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:58.506496 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:58:58.508756 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:58:58.512835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:58:58.514291 kernel: loop0: detected capacity change from 0 to 114432 Jan 29 11:58:58.522563 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:58:58.525356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:58.543641 systemd-journald[1110]: Time spent on flushing to /var/log/journal/d475691b581943da8c6015bf15cac6ef is 56.216ms for 1133 entries. Jan 29 11:58:58.543641 systemd-journald[1110]: System Journal (/var/log/journal/d475691b581943da8c6015bf15cac6ef) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:58:58.613988 systemd-journald[1110]: Received client request to flush runtime journal. Jan 29 11:58:58.615403 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:58:58.615444 kernel: loop1: detected capacity change from 0 to 114328 Jan 29 11:58:58.560432 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:58:58.599354 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:58:58.614262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:58.618909 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:58:58.632892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:58:58.638513 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 29 11:58:58.654606 kernel: loop2: detected capacity change from 0 to 201592 Jan 29 11:58:58.662914 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 29 11:58:58.662934 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 29 11:58:58.671213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:58.699537 kernel: loop3: detected capacity change from 0 to 8 Jan 29 11:58:58.720132 kernel: loop4: detected capacity change from 0 to 114432 Jan 29 11:58:58.738479 kernel: loop5: detected capacity change from 0 to 114328 Jan 29 11:58:58.755263 kernel: loop6: detected capacity change from 0 to 201592 Jan 29 11:58:58.786474 kernel: loop7: detected capacity change from 0 to 8 Jan 29 11:58:58.787715 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 11:58:58.789581 (sd-merge)[1188]: Merged extensions into '/usr'. Jan 29 11:58:58.794718 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:58:58.794733 systemd[1]: Reloading... Jan 29 11:58:58.903090 zram_generator::config[1217]: No configuration found. Jan 29 11:58:58.991006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:58.993242 ldconfig[1139]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:58:59.038535 systemd[1]: Reloading finished in 243 ms. Jan 29 11:58:59.063763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:58:59.067115 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:58:59.080328 systemd[1]: Starting ensure-sysext.service... Jan 29 11:58:59.082962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:59.096145 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:58:59.096166 systemd[1]: Reloading... Jan 29 11:58:59.115973 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:58:59.116338 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:58:59.117296 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:58:59.117503 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jan 29 11:58:59.117555 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jan 29 11:58:59.121165 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:59.121174 systemd-tmpfiles[1252]: Skipping /boot Jan 29 11:58:59.132013 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:59.132032 systemd-tmpfiles[1252]: Skipping /boot Jan 29 11:58:59.202096 zram_generator::config[1284]: No configuration found. Jan 29 11:58:59.296313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:59.342697 systemd[1]: Reloading finished in 246 ms. 
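Aside: the (sd-merge) lines above are systemd-sysext combining the extension images named there into an overlay over /usr. A minimal illustrative sketch that only enumerates candidate images in the conventional sysext directories (assumed standard locations; this is not the actual merge logic):

    # Illustrative only: list the sysext images systemd-sysext would consider from the
    # conventional search directories. This is a plain directory scan, not the actual
    # merge (which builds an overlayfs over /usr and /opt from the images' contents).
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed standard dirs

    found = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            found += sorted(e.name for e in p.iterdir() if e.suffix == ".raw" or e.is_dir())

    print("Using extensions:", ", ".join(found) if found else "(none)")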
Jan 29 11:58:59.362733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:58:59.363948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:59.377592 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:59.387375 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:58:59.392339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:58:59.397278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:59.401487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:59.404391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:58:59.412506 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:58:59.415553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:59.416718 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:59.420198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:59.421876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:59.422665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:59.426498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:59.426656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:59.431748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:59.439356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:59.439992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:59.445198 systemd[1]: Finished ensure-sysext.service. Jan 29 11:58:59.461280 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:58:59.478560 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:59.478795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:59.480005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:58:59.484229 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:58:59.485208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:59.485331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:59.487567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:59.489084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:59.490133 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:59.490272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 11:58:59.494151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:59.494244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:59.505355 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:58:59.508205 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 29 11:58:59.514981 augenrules[1352]: No rules Jan 29 11:58:59.519165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:58:59.523546 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:58:59.537764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:59.549315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:59.549943 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:58:59.550951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:58:59.554893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:58:59.643706 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:58:59.644418 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:58:59.654050 systemd-networkd[1369]: lo: Link UP Jan 29 11:58:59.655748 systemd-networkd[1369]: lo: Gained carrier Jan 29 11:58:59.657628 systemd-networkd[1369]: Enumeration completed Jan 29 11:58:59.657733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:58:59.659818 systemd-timesyncd[1339]: No network connectivity, watching for changes. Jan 29 11:58:59.662256 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:58:59.684148 systemd-resolved[1322]: Positive Trust Anchors: Jan 29 11:58:59.684454 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:59.684490 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:59.691563 systemd-resolved[1322]: Using system hostname 'ci-4081-3-0-9-89f64f6996'. Jan 29 11:58:59.695012 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:59.695696 systemd[1]: Reached target network.target - Network. Jan 29 11:58:59.696144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:59.701131 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:58:59.768464 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
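The 'Positive Trust Anchors' entry above is the DNSSEC root trust anchor in DS-record form. Splitting it into its RFC 4034 fields makes the numbers readable; the record text below is copied from the log, and algorithm 8 and digest type 2 correspond to RSA/SHA-256 and SHA-256 respectively:

# Parse the DS record logged by systemd-resolved into named fields.
# Field order follows RFC 4034: owner, class, type, key tag, algorithm,
# digest type, digest.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
print(f"owner={owner} key_tag={key_tag} algorithm={algorithm} digest_type={digest_type}")
print(f"digest length: {len(digest) * 4} bits")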
Jan 29 11:58:59.768478 systemd-networkd[1369]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:59.769902 systemd-networkd[1369]: eth1: Link UP Jan 29 11:58:59.769914 systemd-networkd[1369]: eth1: Gained carrier Jan 29 11:58:59.769930 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:59.784099 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1388) Jan 29 11:58:59.792151 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:59.792163 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:59.793549 systemd-networkd[1369]: eth0: Link UP Jan 29 11:58:59.793557 systemd-networkd[1369]: eth0: Gained carrier Jan 29 11:58:59.793574 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:59.799075 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:58:59.817374 systemd-networkd[1369]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:58:59.818503 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Jan 29 11:58:59.846646 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 11:58:59.846788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:59.855222 systemd-networkd[1369]: eth0: DHCPv4 address 91.107.217.81/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 11:58:59.855634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:59.860243 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Jan 29 11:58:59.860383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:59.862156 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Jan 29 11:58:59.865273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:59.865895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:59.865937 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:58:59.869444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:59.869668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:59.885332 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 29 11:58:59.885418 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:58:59.885447 kernel: [drm] features: -context_init Jan 29 11:58:59.887227 kernel: [drm] number of scanouts: 1 Jan 29 11:58:59.888427 kernel: [drm] number of cap sets: 0 Jan 29 11:58:59.888488 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 11:58:59.890878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
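Both DHCPv4 leases above hand out /32 addresses, so the advertised gateway never falls inside the local prefix and has to be reached through an on-link host route, which matches how this provider's cloud networking is commonly configured. A quick check with the standard ipaddress module, using the addresses from the log:

# Show that neither gateway lies inside its interface prefix, which is why the
# /32 leases need an explicit on-link route to the gateway.
import ipaddress

leases = [("91.107.217.81/32", "172.31.1.1"), ("10.0.0.3/32", "10.0.0.1")]
for prefix, gateway in leases:
    inside = ipaddress.ip_address(gateway) in ipaddress.ip_interface(prefix).network
    print(f"{prefix}: gateway {gateway} inside prefix? {inside}")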
Jan 29 11:58:59.892513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:59.894245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:59.894510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:59.896364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:59.896413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:59.913602 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 11:58:59.922589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 11:58:59.923074 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:58:59.933477 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:58:59.940256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:59.946307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:59.948369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:59.952271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:58:59.957307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:00.022881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:00.086398 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:59:00.094339 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:59:00.107277 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:59:00.139595 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:59:00.142124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:59:00.143195 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:59:00.143940 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:59:00.144884 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:59:00.145937 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:59:00.146739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:59:00.147399 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:59:00.148001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:59:00.148034 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:59:00.148517 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:59:00.150657 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:59:00.152941 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:59:00.158379 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 29 11:59:00.160763 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:59:00.161903 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:59:00.162593 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:59:00.163113 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:59:00.163660 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:59:00.163697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:59:00.166207 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:59:00.170391 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:59:00.172237 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:59:00.176678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:59:00.182480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:59:00.184202 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:59:00.184730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:59:00.185791 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:59:00.192211 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:59:00.194273 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 11:59:00.199265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:59:00.207262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:59:00.212277 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:59:00.213543 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:59:00.215828 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:59:00.218888 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:59:00.224319 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:59:00.245137 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:59:00.256993 jq[1439]: false Jan 29 11:59:00.258168 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:59:00.261385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:59:00.262582 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:59:00.277859 jq[1450]: true Jan 29 11:59:00.288390 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:59:00.288557 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:59:00.290694 dbus-daemon[1438]: [system] SELinux support is enabled Jan 29 11:59:00.290945 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 11:59:00.295692 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:59:00.295724 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:59:00.299143 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:59:00.299174 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:59:00.316173 extend-filesystems[1442]: Found loop4 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found loop5 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found loop6 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found loop7 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda1 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda2 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda3 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found usr Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda4 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda6 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda7 Jan 29 11:59:00.324929 extend-filesystems[1442]: Found sda9 Jan 29 11:59:00.324929 extend-filesystems[1442]: Checking size of /dev/sda9 Jan 29 11:59:00.352305 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:59:00.363427 tar[1454]: linux-arm64/LICENSE Jan 29 11:59:00.363427 tar[1454]: linux-arm64/helm Jan 29 11:59:00.363693 coreos-metadata[1437]: Jan 29 11:59:00.325 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 11:59:00.363693 coreos-metadata[1437]: Jan 29 11:59:00.325 INFO Fetch successful Jan 29 11:59:00.363693 coreos-metadata[1437]: Jan 29 11:59:00.325 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 11:59:00.363693 coreos-metadata[1437]: Jan 29 11:59:00.325 INFO Fetch successful Jan 29 11:59:00.363823 jq[1465]: true Jan 29 11:59:00.352494 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:59:00.368233 extend-filesystems[1442]: Resized partition /dev/sda9 Jan 29 11:59:00.370367 update_engine[1449]: I20250129 11:59:00.367491 1449 main.cc:92] Flatcar Update Engine starting Jan 29 11:59:00.374900 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:59:00.378468 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:59:00.382081 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 11:59:00.382146 update_engine[1449]: I20250129 11:59:00.379343 1449 update_check_scheduler.cc:74] Next update check in 6m36s Jan 29 11:59:00.388532 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:59:00.413740 systemd-logind[1448]: New seat seat0. Jan 29 11:59:00.419103 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:59:00.419131 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 11:59:00.419310 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:59:00.453530 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
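The EXT4 resize above grows /dev/sda9 from 1617920 to 9393147 blocks, and the resize summary a little further down confirms 4 KiB blocks. Converting the counts into sizes makes the change easier to read; the numbers are taken straight from the kernel and resize2fs messages:

# Convert the ext4 block counts from the resize messages into GiB (4 KiB blocks).
BLOCK = 4096
for label, blocks in (("before", 1_617_920), ("after", 9_393_147)):
    print(f"{label}: {blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")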
Jan 29 11:59:00.455924 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:59:00.463091 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:59:00.464902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:59:00.470974 systemd[1]: Starting sshkeys.service... Jan 29 11:59:00.508083 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 11:59:00.511528 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:59:00.522079 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:59:00.522079 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 11:59:00.522079 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 11:59:00.529561 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1373) Jan 29 11:59:00.529590 extend-filesystems[1442]: Resized filesystem in /dev/sda9 Jan 29 11:59:00.529590 extend-filesystems[1442]: Found sr0 Jan 29 11:59:00.532426 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:59:00.533728 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:59:00.534783 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:59:00.639861 coreos-metadata[1514]: Jan 29 11:59:00.636 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 11:59:00.650593 coreos-metadata[1514]: Jan 29 11:59:00.648 INFO Fetch successful Jan 29 11:59:00.650824 unknown[1514]: wrote ssh authorized keys file for user: core Jan 29 11:59:00.695105 containerd[1455]: time="2025-01-29T11:59:00.693386960Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:59:00.699834 update-ssh-keys[1522]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:59:00.702421 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:59:00.706439 systemd[1]: Finished sshkeys.service. Jan 29 11:59:00.760934 containerd[1455]: time="2025-01-29T11:59:00.760504440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.761894880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.761934360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.761952680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762125360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762143000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762205360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762217200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762379200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762393840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762406120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763656 containerd[1455]: time="2025-01-29T11:59:00.762415360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762479040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762701480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762804680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762820880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762898360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:59:00.763960 containerd[1455]: time="2025-01-29T11:59:00.762937520Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:59:00.770316 containerd[1455]: time="2025-01-29T11:59:00.770279200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:59:00.770464 containerd[1455]: time="2025-01-29T11:59:00.770450000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:59:00.770570 containerd[1455]: time="2025-01-29T11:59:00.770557800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:59:00.770675 containerd[1455]: time="2025-01-29T11:59:00.770661160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:59:00.770782 containerd[1455]: time="2025-01-29T11:59:00.770767240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 11:59:00.771202 containerd[1455]: time="2025-01-29T11:59:00.771028000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:59:00.771398 containerd[1455]: time="2025-01-29T11:59:00.771382000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:59:00.772036 containerd[1455]: time="2025-01-29T11:59:00.771573320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:59:00.772036 containerd[1455]: time="2025-01-29T11:59:00.771593240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:59:00.772151 containerd[1455]: time="2025-01-29T11:59:00.772134440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:59:00.772217 containerd[1455]: time="2025-01-29T11:59:00.772204920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772300 containerd[1455]: time="2025-01-29T11:59:00.772286400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772496 containerd[1455]: time="2025-01-29T11:59:00.772357480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772496 containerd[1455]: time="2025-01-29T11:59:00.772376040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772496 containerd[1455]: time="2025-01-29T11:59:00.772391920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772496 containerd[1455]: time="2025-01-29T11:59:00.772406040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772989 containerd[1455]: time="2025-01-29T11:59:00.772419560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772989 containerd[1455]: time="2025-01-29T11:59:00.772648720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:59:00.772989 containerd[1455]: time="2025-01-29T11:59:00.772936000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.772989 containerd[1455]: time="2025-01-29T11:59:00.772956080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773130 containerd[1455]: time="2025-01-29T11:59:00.772974600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773233 containerd[1455]: time="2025-01-29T11:59:00.773173000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773653 containerd[1455]: time="2025-01-29T11:59:00.773300960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773653 containerd[1455]: time="2025-01-29T11:59:00.773324680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:00.773653 containerd[1455]: time="2025-01-29T11:59:00.773337480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773653 containerd[1455]: time="2025-01-29T11:59:00.773349840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773653 containerd[1455]: time="2025-01-29T11:59:00.773618160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773849 containerd[1455]: time="2025-01-29T11:59:00.773637760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773849 containerd[1455]: time="2025-01-29T11:59:00.773785400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773849 containerd[1455]: time="2025-01-29T11:59:00.773812280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.773849 containerd[1455]: time="2025-01-29T11:59:00.773825360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.774225 containerd[1455]: time="2025-01-29T11:59:00.774207480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:59:00.774309 containerd[1455]: time="2025-01-29T11:59:00.774296800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.774377 containerd[1455]: time="2025-01-29T11:59:00.774363640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.774453 containerd[1455]: time="2025-01-29T11:59:00.774440880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:59:00.774648 containerd[1455]: time="2025-01-29T11:59:00.774628120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:59:00.774841 containerd[1455]: time="2025-01-29T11:59:00.774823440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:59:00.774910 containerd[1455]: time="2025-01-29T11:59:00.774895480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:59:00.774971 containerd[1455]: time="2025-01-29T11:59:00.774949640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:59:00.775916 containerd[1455]: time="2025-01-29T11:59:00.775007520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:59:00.775916 containerd[1455]: time="2025-01-29T11:59:00.775027480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:59:00.775916 containerd[1455]: time="2025-01-29T11:59:00.775682600Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:59:00.775916 containerd[1455]: time="2025-01-29T11:59:00.775707720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:00.776682 containerd[1455]: time="2025-01-29T11:59:00.776243760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:59:00.776682 containerd[1455]: time="2025-01-29T11:59:00.776320520Z" level=info msg="Connect containerd service" Jan 29 11:59:00.776682 containerd[1455]: time="2025-01-29T11:59:00.776356680Z" level=info msg="using legacy CRI server" Jan 29 11:59:00.776887 containerd[1455]: time="2025-01-29T11:59:00.776865120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:59:00.777792 containerd[1455]: time="2025-01-29T11:59:00.777025440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:59:00.778960 containerd[1455]: time="2025-01-29T11:59:00.778936680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:59:00.779177 
containerd[1455]: time="2025-01-29T11:59:00.779145560Z" level=info msg="Start subscribing containerd event" Jan 29 11:59:00.779253 containerd[1455]: time="2025-01-29T11:59:00.779240400Z" level=info msg="Start recovering state" Jan 29 11:59:00.779366 containerd[1455]: time="2025-01-29T11:59:00.779352480Z" level=info msg="Start event monitor" Jan 29 11:59:00.779441 containerd[1455]: time="2025-01-29T11:59:00.779427040Z" level=info msg="Start snapshots syncer" Jan 29 11:59:00.779514 containerd[1455]: time="2025-01-29T11:59:00.779502200Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:59:00.779567 containerd[1455]: time="2025-01-29T11:59:00.779555200Z" level=info msg="Start streaming server" Jan 29 11:59:00.781724 containerd[1455]: time="2025-01-29T11:59:00.781701120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:59:00.782106 containerd[1455]: time="2025-01-29T11:59:00.781974360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:59:00.782815 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:59:00.787809 containerd[1455]: time="2025-01-29T11:59:00.787780240Z" level=info msg="containerd successfully booted in 0.095503s" Jan 29 11:59:00.870837 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:59:01.058204 systemd-networkd[1369]: eth1: Gained IPv6LL Jan 29 11:59:01.059280 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Jan 29 11:59:01.061696 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:59:01.063401 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:59:01.072295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:01.078350 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:59:01.117559 tar[1454]: linux-arm64/README.md Jan 29 11:59:01.131424 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:59:01.134184 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:59:01.186206 systemd-networkd[1369]: eth0: Gained IPv6LL Jan 29 11:59:01.187231 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Jan 29 11:59:01.565352 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:59:01.603007 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:59:01.612560 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:59:01.619505 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:59:01.620345 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:59:01.630274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:59:01.639856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:59:01.649039 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:59:01.655605 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:59:01.657021 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:59:01.865775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:01.870732 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 11:59:01.876570 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:01.878167 systemd[1]: Startup finished in 782ms (kernel) + 4.994s (initrd) + 4.225s (userspace) = 10.003s. Jan 29 11:59:02.370714 kubelet[1565]: E0129 11:59:02.370668 1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:02.375660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:02.375808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:12.515742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:59:12.530446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:12.644230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:12.648731 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:12.700269 kubelet[1585]: E0129 11:59:12.700196 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:12.703839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:12.703969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:22.765091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:59:22.774407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:22.889413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:22.894482 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:22.934710 kubelet[1600]: E0129 11:59:22.934630 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:22.937141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:22.937352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:31.599143 systemd-timesyncd[1339]: Contacted time server 158.220.97.17:123 (2.flatcar.pool.ntp.org). Jan 29 11:59:31.599245 systemd-timesyncd[1339]: Initial clock synchronization to Wed 2025-01-29 11:59:31.996158 UTC. Jan 29 11:59:33.016040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:59:33.023388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:33.150332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
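The kubelet exit above, and the restart loop that follows, comes down to /var/lib/kubelet/config.yaml not existing yet; on a node like this the file is normally written later by kubeadm or the provisioning tooling rather than by hand. Purely as an illustration of the expected format, not the configuration this node eventually received, a minimal KubeletConfiguration document looks like the sketch below; cgroupDriver: systemd is shown because the containerd CRI config above already sets SystemdCgroup:true.

# Illustrative only: the smallest useful KubeletConfiguration document.
# The real file is generated during cluster join; do not hand-create it blindly.
minimal_kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
print("kubelet looks for a document like this at /var/lib/kubelet/config.yaml:")
print(minimal_kubelet_config)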
Jan 29 11:59:33.166693 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:33.213986 kubelet[1614]: E0129 11:59:33.213914 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:33.216586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:33.216771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:43.266290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:59:43.285455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:43.394339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:43.397316 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:43.443087 kubelet[1630]: E0129 11:59:43.443030 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:43.446102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:43.446400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:45.920964 update_engine[1449]: I20250129 11:59:45.920823 1449 update_attempter.cc:509] Updating boot flags... Jan 29 11:59:45.966117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1647) Jan 29 11:59:46.035632 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1643) Jan 29 11:59:46.080119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1643) Jan 29 11:59:53.515726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 11:59:53.523380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:53.635862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:53.640642 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:53.683179 kubelet[1667]: E0129 11:59:53.683099 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:53.686510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:53.686714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:03.765095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 12:00:03.771411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
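The restart counter advances roughly every ten seconds, which is consistent with an on-failure restart policy with a short delay; that is an inference from the timestamps rather than something the log states. Diffing two of the 'Scheduled restart job' timestamps from this stretch shows the cadence:

# Estimate the kubelet restart cadence from two timestamps copied from the log
# (restart counters 4 and 5); same day, so a time-of-day difference is enough.
from datetime import datetime

t4 = datetime.strptime("11:59:43.266290", "%H:%M:%S.%f")
t5 = datetime.strptime("11:59:53.515726", "%H:%M:%S.%f")
print(f"gap between restart 4 and restart 5: {(t5 - t4).total_seconds():.1f} s")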
Jan 29 12:00:03.885476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:03.890356 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:03.938982 kubelet[1681]: E0129 12:00:03.938903 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:03.942651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:03.943198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:14.015474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 12:00:14.025407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:14.147221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:14.159583 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:14.206145 kubelet[1698]: E0129 12:00:14.206080 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:14.209050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:14.209220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:24.265464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 12:00:24.275385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:24.390298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:24.400609 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:24.445344 kubelet[1713]: E0129 12:00:24.445279 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:24.448293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:24.448422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:34.515718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 12:00:34.527446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:34.638168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:00:34.648657 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:34.690555 kubelet[1728]: E0129 12:00:34.690454 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:34.692726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:34.692890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:44.765351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 12:00:44.775316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:44.888336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:44.892731 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:44.931784 kubelet[1743]: E0129 12:00:44.931726 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:44.934361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:44.934635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:55.015052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 12:00:55.021374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:55.138799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:55.147439 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:55.191952 kubelet[1758]: E0129 12:00:55.191852 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:55.194204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:55.194373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:57.956770 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:00:57.963509 systemd[1]: Started sshd@0-91.107.217.81:22-139.178.89.65:34856.service - OpenSSH per-connection server daemon (139.178.89.65:34856). Jan 29 12:00:58.949232 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 34856 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:00:58.952091 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:58.964847 systemd-logind[1448]: New session 1 of user core. Jan 29 12:00:58.966427 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
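The 'SHA256:7wq88...' value in the sshd lines above is the OpenSSH-style fingerprint of the client's public key: the unpadded base64 encoding of the SHA-256 digest of the raw key blob. A small sketch that recomputes such a fingerprint from an authorized_keys line fed on stdin, assuming a plain 'type blob comment' entry with no options prefix:

# Recompute an OpenSSH SHA256 key fingerprint from an authorized_keys entry.
# Usage (assumes a plain "type blob comment" line, no options prefix):
#   head -1 /home/core/.ssh/authorized_keys | python3 fingerprint.py
import base64, hashlib, sys

blob_b64 = sys.stdin.read().split()[1]            # second field is the base64 key blob
digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))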
Jan 29 12:00:58.974916 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:00:58.988984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:00:58.998577 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:00:59.002749 (systemd)[1770]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:00:59.117701 systemd[1770]: Queued start job for default target default.target. Jan 29 12:00:59.130296 systemd[1770]: Created slice app.slice - User Application Slice. Jan 29 12:00:59.130362 systemd[1770]: Reached target paths.target - Paths. Jan 29 12:00:59.130395 systemd[1770]: Reached target timers.target - Timers. Jan 29 12:00:59.132817 systemd[1770]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:00:59.147197 systemd[1770]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:00:59.147344 systemd[1770]: Reached target sockets.target - Sockets. Jan 29 12:00:59.147358 systemd[1770]: Reached target basic.target - Basic System. Jan 29 12:00:59.147404 systemd[1770]: Reached target default.target - Main User Target. Jan 29 12:00:59.147431 systemd[1770]: Startup finished in 136ms. Jan 29 12:00:59.147562 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:00:59.155460 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:00:59.857761 systemd[1]: Started sshd@1-91.107.217.81:22-139.178.89.65:34864.service - OpenSSH per-connection server daemon (139.178.89.65:34864). Jan 29 12:01:00.858741 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 34864 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:00.861232 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:00.867413 systemd-logind[1448]: New session 2 of user core. Jan 29 12:01:00.876359 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:01:01.547715 sshd[1781]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:01.554302 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:01:01.554436 systemd[1]: sshd@1-91.107.217.81:22-139.178.89.65:34864.service: Deactivated successfully. Jan 29 12:01:01.556510 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:01:01.559940 systemd-logind[1448]: Removed session 2. Jan 29 12:01:01.721806 systemd[1]: Started sshd@2-91.107.217.81:22-139.178.89.65:48750.service - OpenSSH per-connection server daemon (139.178.89.65:48750). Jan 29 12:01:02.705999 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 48750 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:02.708368 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:02.712966 systemd-logind[1448]: New session 3 of user core. Jan 29 12:01:02.725404 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:01:03.386457 sshd[1789]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:03.390747 systemd[1]: sshd@2-91.107.217.81:22-139.178.89.65:48750.service: Deactivated successfully. Jan 29 12:01:03.394571 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:01:03.395437 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:01:03.396957 systemd-logind[1448]: Removed session 3. 
Jan 29 12:01:03.567351 systemd[1]: Started sshd@3-91.107.217.81:22-139.178.89.65:48754.service - OpenSSH per-connection server daemon (139.178.89.65:48754). Jan 29 12:01:04.537694 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 48754 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:04.540379 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:04.546621 systemd-logind[1448]: New session 4 of user core. Jan 29 12:01:04.552504 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:01:05.214550 sshd[1796]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:05.219551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 12:01:05.220625 systemd[1]: sshd@3-91.107.217.81:22-139.178.89.65:48754.service: Deactivated successfully. Jan 29 12:01:05.222570 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:01:05.224594 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:01:05.229722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:05.231206 systemd-logind[1448]: Removed session 4. Jan 29 12:01:05.337156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:05.341808 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:05.385557 kubelet[1810]: E0129 12:01:05.384633 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:05.385348 systemd[1]: Started sshd@4-91.107.217.81:22-139.178.89.65:48758.service - OpenSSH per-connection server daemon (139.178.89.65:48758). Jan 29 12:01:05.387758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:05.387899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:06.382157 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 48758 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:06.385003 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:06.389904 systemd-logind[1448]: New session 5 of user core. Jan 29 12:01:06.398380 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:01:06.916835 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:01:06.917135 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:01:07.226530 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:01:07.226610 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:01:07.485505 dockerd[1835]: time="2025-01-29T12:01:07.485119000Z" level=info msg="Starting up" Jan 29 12:01:07.573506 dockerd[1835]: time="2025-01-29T12:01:07.573461954Z" level=info msg="Loading containers: start." 
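With dockerd starting up above, the daemon will shortly serve its HTTP API on the Unix socket that the earlier socket-unit message pointed at (/run/docker.sock). A minimal sketch, using only the standard library and assuming that default socket path plus permission to open it, that asks the daemon for its version:

# Query the Docker daemon's /version endpoint over its Unix socket with raw
# HTTP/1.1; needs access to /run/docker.sock (root or the docker group).
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
print(reply.decode(errors="replace"))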
Jan 29 12:01:07.684296 kernel: Initializing XFRM netlink socket Jan 29 12:01:07.759246 systemd-networkd[1369]: docker0: Link UP Jan 29 12:01:07.774433 dockerd[1835]: time="2025-01-29T12:01:07.774333235Z" level=info msg="Loading containers: done." Jan 29 12:01:07.790793 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1653721641-merged.mount: Deactivated successfully. Jan 29 12:01:07.793290 dockerd[1835]: time="2025-01-29T12:01:07.793154959Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:01:07.795402 dockerd[1835]: time="2025-01-29T12:01:07.794964686Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:01:07.795402 dockerd[1835]: time="2025-01-29T12:01:07.795151191Z" level=info msg="Daemon has completed initialization" Jan 29 12:01:07.834343 dockerd[1835]: time="2025-01-29T12:01:07.834174827Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:01:07.834537 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:01:08.595298 containerd[1455]: time="2025-01-29T12:01:08.595246630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 12:01:09.258684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165234323.mount: Deactivated successfully. Jan 29 12:01:11.095244 containerd[1455]: time="2025-01-29T12:01:11.095003264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:11.097573 containerd[1455]: time="2025-01-29T12:01:11.097505873Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26221040" Jan 29 12:01:11.099210 containerd[1455]: time="2025-01-29T12:01:11.099160468Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:11.102915 containerd[1455]: time="2025-01-29T12:01:11.102835771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:11.104409 containerd[1455]: time="2025-01-29T12:01:11.104123012Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.508819978s" Jan 29 12:01:11.104409 containerd[1455]: time="2025-01-29T12:01:11.104171366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 12:01:11.105149 containerd[1455]: time="2025-01-29T12:01:11.105113489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 12:01:13.565188 containerd[1455]: time="2025-01-29T12:01:13.565052150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:13.566241 
containerd[1455]: time="2025-01-29T12:01:13.565904735Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527127" Jan 29 12:01:13.567081 containerd[1455]: time="2025-01-29T12:01:13.567023851Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:13.570773 containerd[1455]: time="2025-01-29T12:01:13.570714681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:13.572377 containerd[1455]: time="2025-01-29T12:01:13.571810920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 2.466659036s" Jan 29 12:01:13.572377 containerd[1455]: time="2025-01-29T12:01:13.571855555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 12:01:13.572513 containerd[1455]: time="2025-01-29T12:01:13.572423612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 12:01:15.514913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 12:01:15.521388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 12:01:15.570132 containerd[1455]: time="2025-01-29T12:01:15.567682712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:15.574156 containerd[1455]: time="2025-01-29T12:01:15.574099280Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481133" Jan 29 12:01:15.576408 containerd[1455]: time="2025-01-29T12:01:15.576246349Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:15.580005 containerd[1455]: time="2025-01-29T12:01:15.579956903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:15.582920 containerd[1455]: time="2025-01-29T12:01:15.582875135Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 2.010425487s" Jan 29 12:01:15.583103 containerd[1455]: time="2025-01-29T12:01:15.583048318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 12:01:15.583824 containerd[1455]: time="2025-01-29T12:01:15.583799164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 12:01:15.653315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:15.653568 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:15.699199 kubelet[2044]: E0129 12:01:15.699117 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:15.702012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:15.702287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:17.131762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230023684.mount: Deactivated successfully. 
Jan 29 12:01:17.477156 containerd[1455]: time="2025-01-29T12:01:17.476919538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:17.478567 containerd[1455]: time="2025-01-29T12:01:17.478483722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364423" Jan 29 12:01:17.479721 containerd[1455]: time="2025-01-29T12:01:17.479641382Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:17.483215 containerd[1455]: time="2025-01-29T12:01:17.483103561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:17.484990 containerd[1455]: time="2025-01-29T12:01:17.484759697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.900819787s" Jan 29 12:01:17.484990 containerd[1455]: time="2025-01-29T12:01:17.484826931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 12:01:17.486399 containerd[1455]: time="2025-01-29T12:01:17.486014268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 12:01:18.097291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845610909.mount: Deactivated successfully. 
Jan 29 12:01:19.223722 containerd[1455]: time="2025-01-29T12:01:19.221980382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.223722 containerd[1455]: time="2025-01-29T12:01:19.223484308Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 29 12:01:19.223722 containerd[1455]: time="2025-01-29T12:01:19.223659734Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.230787 containerd[1455]: time="2025-01-29T12:01:19.230682801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.232066 containerd[1455]: time="2025-01-29T12:01:19.231923267Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.745835405s" Jan 29 12:01:19.232066 containerd[1455]: time="2025-01-29T12:01:19.231961464Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 12:01:19.232514 containerd[1455]: time="2025-01-29T12:01:19.232485944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:01:19.693289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525237989.mount: Deactivated successfully. 
Jan 29 12:01:19.698431 containerd[1455]: time="2025-01-29T12:01:19.697594442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.699520 containerd[1455]: time="2025-01-29T12:01:19.699488779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 29 12:01:19.700748 containerd[1455]: time="2025-01-29T12:01:19.700715006Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.703885 containerd[1455]: time="2025-01-29T12:01:19.703851168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.704868 containerd[1455]: time="2025-01-29T12:01:19.704518477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 471.948059ms" Jan 29 12:01:19.705303 containerd[1455]: time="2025-01-29T12:01:19.705282579Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 12:01:19.705952 containerd[1455]: time="2025-01-29T12:01:19.705811779Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 12:01:20.323250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826276593.mount: Deactivated successfully. Jan 29 12:01:23.260326 containerd[1455]: time="2025-01-29T12:01:23.260261856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:23.262024 containerd[1455]: time="2025-01-29T12:01:23.261704976Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491" Jan 29 12:01:23.263698 containerd[1455]: time="2025-01-29T12:01:23.263524834Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:23.269965 containerd[1455]: time="2025-01-29T12:01:23.269144119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:23.269965 containerd[1455]: time="2025-01-29T12:01:23.269453102Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.563314067s" Jan 29 12:01:23.269965 containerd[1455]: time="2025-01-29T12:01:23.269491820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 12:01:25.764869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
Jan 29 12:01:25.774571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:25.897789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:25.907540 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:25.953600 kubelet[2194]: E0129 12:01:25.953544 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:25.956204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:25.956335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:27.413505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:27.421488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:27.454969 systemd[1]: Reloading requested from client PID 2209 ('systemctl') (unit session-5.scope)... Jan 29 12:01:27.455129 systemd[1]: Reloading... Jan 29 12:01:27.568161 zram_generator::config[2249]: No configuration found. Jan 29 12:01:27.667211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:27.736266 systemd[1]: Reloading finished in 280 ms. Jan 29 12:01:27.798776 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:27.800852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:27.802255 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:01:27.802626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:27.808507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:27.915346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:27.925741 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:27.973749 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:27.973749 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 12:01:27.973749 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:01:27.973749 kubelet[2300]: I0129 12:01:27.973548 2300 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:29.111103 kubelet[2300]: I0129 12:01:29.110719 2300 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:01:29.111103 kubelet[2300]: I0129 12:01:29.110761 2300 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:29.111103 kubelet[2300]: I0129 12:01:29.111093 2300 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:01:29.137628 kubelet[2300]: E0129 12:01:29.137581 2300 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.107.217.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:29.140916 kubelet[2300]: I0129 12:01:29.140801 2300 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:29.149971 kubelet[2300]: E0129 12:01:29.149894 2300 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:01:29.149971 kubelet[2300]: I0129 12:01:29.149942 2300 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:01:29.155500 kubelet[2300]: I0129 12:01:29.155451 2300 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:01:29.156596 kubelet[2300]: I0129 12:01:29.156527 2300 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:29.156775 kubelet[2300]: I0129 12:01:29.156580 2300 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-9-89f64f6996","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:01:29.156876 kubelet[2300]: I0129 12:01:29.156836 2300 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:01:29.156876 kubelet[2300]: I0129 12:01:29.156847 2300 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:01:29.157120 kubelet[2300]: I0129 12:01:29.157088 2300 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:29.160794 kubelet[2300]: I0129 12:01:29.160585 2300 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:01:29.160794 kubelet[2300]: I0129 12:01:29.160649 2300 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:29.160794 kubelet[2300]: I0129 12:01:29.160672 2300 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:01:29.160794 kubelet[2300]: I0129 12:01:29.160684 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:29.164957 kubelet[2300]: W0129 12:01:29.164585 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-9-89f64f6996&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:29.165198 kubelet[2300]: E0129 12:01:29.165168 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-9-89f64f6996&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:29.165412 
kubelet[2300]: I0129 12:01:29.165391 2300 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:29.166735 kubelet[2300]: I0129 12:01:29.166319 2300 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:29.166735 kubelet[2300]: W0129 12:01:29.166519 2300 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:01:29.168530 kubelet[2300]: I0129 12:01:29.168509 2300 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:01:29.168706 kubelet[2300]: I0129 12:01:29.168693 2300 server.go:1287] "Started kubelet" Jan 29 12:01:29.170658 kubelet[2300]: W0129 12:01:29.170592 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:29.170744 kubelet[2300]: E0129 12:01:29.170660 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:29.174822 kubelet[2300]: I0129 12:01:29.174797 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:29.177465 kubelet[2300]: I0129 12:01:29.177409 2300 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:29.178631 kubelet[2300]: E0129 12:01:29.178383 2300 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.107.217.81:6443/api/v1/namespaces/default/events\": dial tcp 91.107.217.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-9-89f64f6996.181f281db0804e28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-9-89f64f6996,UID:ci-4081-3-0-9-89f64f6996,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-9-89f64f6996,},FirstTimestamp:2025-01-29 12:01:29.168662056 +0000 UTC m=+1.237976724,LastTimestamp:2025-01-29 12:01:29.168662056 +0000 UTC m=+1.237976724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-9-89f64f6996,}" Jan 29 12:01:29.178631 kubelet[2300]: I0129 12:01:29.178593 2300 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:01:29.179724 kubelet[2300]: I0129 12:01:29.179498 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:29.179929 kubelet[2300]: I0129 12:01:29.179914 2300 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:29.180267 kubelet[2300]: I0129 12:01:29.180249 2300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:01:29.181552 kubelet[2300]: E0129 12:01:29.181402 2300 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-9-89f64f6996\" 
not found" Jan 29 12:01:29.181552 kubelet[2300]: I0129 12:01:29.181441 2300 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:01:29.181667 kubelet[2300]: I0129 12:01:29.181604 2300 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:29.181667 kubelet[2300]: I0129 12:01:29.181649 2300 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:29.182313 kubelet[2300]: W0129 12:01:29.182022 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:29.182313 kubelet[2300]: E0129 12:01:29.182121 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:29.182673 kubelet[2300]: I0129 12:01:29.182646 2300 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:29.182923 kubelet[2300]: I0129 12:01:29.182745 2300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:29.184080 kubelet[2300]: I0129 12:01:29.183831 2300 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:29.189523 kubelet[2300]: E0129 12:01:29.189479 2300 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:29.202567 kubelet[2300]: I0129 12:01:29.202521 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:29.204105 kubelet[2300]: I0129 12:01:29.203680 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:01:29.204105 kubelet[2300]: I0129 12:01:29.203703 2300 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:01:29.204105 kubelet[2300]: I0129 12:01:29.203725 2300 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 12:01:29.204105 kubelet[2300]: I0129 12:01:29.203731 2300 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:01:29.204105 kubelet[2300]: E0129 12:01:29.203805 2300 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:29.207776 kubelet[2300]: E0129 12:01:29.207695 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-9-89f64f6996?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="200ms" Jan 29 12:01:29.210926 kubelet[2300]: W0129 12:01:29.210852 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:29.211024 kubelet[2300]: E0129 12:01:29.210931 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:29.218308 kubelet[2300]: I0129 12:01:29.218285 2300 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:01:29.218706 kubelet[2300]: I0129 12:01:29.218454 2300 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:29.218706 kubelet[2300]: I0129 12:01:29.218479 2300 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:29.221898 kubelet[2300]: I0129 12:01:29.221570 2300 policy_none.go:49] "None policy: Start" Jan 29 12:01:29.221898 kubelet[2300]: I0129 12:01:29.221600 2300 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:01:29.221898 kubelet[2300]: I0129 12:01:29.221613 2300 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:29.229854 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:01:29.246711 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:01:29.251448 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:01:29.262936 kubelet[2300]: I0129 12:01:29.262836 2300 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:29.263230 kubelet[2300]: I0129 12:01:29.263196 2300 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:01:29.263230 kubelet[2300]: I0129 12:01:29.263226 2300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:29.264499 kubelet[2300]: I0129 12:01:29.264445 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:29.266748 kubelet[2300]: E0129 12:01:29.266566 2300 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 12:01:29.267252 kubelet[2300]: E0129 12:01:29.267195 2300 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-9-89f64f6996\" not found" Jan 29 12:01:29.318304 systemd[1]: Created slice kubepods-burstable-pod21f311286ec187d4a317e32171965203.slice - libcontainer container kubepods-burstable-pod21f311286ec187d4a317e32171965203.slice. Jan 29 12:01:29.338644 kubelet[2300]: E0129 12:01:29.338226 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.343482 systemd[1]: Created slice kubepods-burstable-pod6a5419136cb7bfc86cf2e6a6431e1d9f.slice - libcontainer container kubepods-burstable-pod6a5419136cb7bfc86cf2e6a6431e1d9f.slice. Jan 29 12:01:29.346856 kubelet[2300]: E0129 12:01:29.346621 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.347618 systemd[1]: Created slice kubepods-burstable-pod5ca2ceaf4778ea915f76a4a82d4db39f.slice - libcontainer container kubepods-burstable-pod5ca2ceaf4778ea915f76a4a82d4db39f.slice. Jan 29 12:01:29.350262 kubelet[2300]: E0129 12:01:29.350219 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.368016 kubelet[2300]: I0129 12:01:29.367899 2300 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.369398 kubelet[2300]: E0129 12:01:29.369369 2300 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.408541 kubelet[2300]: E0129 12:01:29.408473 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-9-89f64f6996?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="400ms" Jan 29 12:01:29.484177 kubelet[2300]: I0129 12:01:29.483444 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484177 kubelet[2300]: I0129 12:01:29.483595 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484177 kubelet[2300]: I0129 12:01:29.483691 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: 
\"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484177 kubelet[2300]: I0129 12:01:29.483816 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484177 kubelet[2300]: I0129 12:01:29.483857 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484582 kubelet[2300]: I0129 12:01:29.483889 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ca2ceaf4778ea915f76a4a82d4db39f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-9-89f64f6996\" (UID: \"5ca2ceaf4778ea915f76a4a82d4db39f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484582 kubelet[2300]: I0129 12:01:29.483912 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484582 kubelet[2300]: I0129 12:01:29.483982 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.484582 kubelet[2300]: I0129 12:01:29.484012 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.572911 kubelet[2300]: I0129 12:01:29.572865 2300 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.576079 kubelet[2300]: E0129 12:01:29.574324 2300 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.640517 containerd[1455]: time="2025-01-29T12:01:29.640394286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-9-89f64f6996,Uid:21f311286ec187d4a317e32171965203,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:29.649163 containerd[1455]: time="2025-01-29T12:01:29.649115340Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-9-89f64f6996,Uid:6a5419136cb7bfc86cf2e6a6431e1d9f,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:29.651843 containerd[1455]: time="2025-01-29T12:01:29.651799138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-9-89f64f6996,Uid:5ca2ceaf4778ea915f76a4a82d4db39f,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:29.809826 kubelet[2300]: E0129 12:01:29.809699 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-9-89f64f6996?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="800ms" Jan 29 12:01:29.976692 kubelet[2300]: I0129 12:01:29.976571 2300 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:29.977453 kubelet[2300]: E0129 12:01:29.977417 2300 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:30.213088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063408159.mount: Deactivated successfully. Jan 29 12:01:30.221516 containerd[1455]: time="2025-01-29T12:01:30.221316586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:30.223012 containerd[1455]: time="2025-01-29T12:01:30.222906504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:30.224460 containerd[1455]: time="2025-01-29T12:01:30.224300786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 12:01:30.226306 containerd[1455]: time="2025-01-29T12:01:30.226262574Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:30.227410 containerd[1455]: time="2025-01-29T12:01:30.227304626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:30.229297 containerd[1455]: time="2025-01-29T12:01:30.228546553Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:30.238945 containerd[1455]: time="2025-01-29T12:01:30.238856038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:30.239860 containerd[1455]: time="2025-01-29T12:01:30.239776294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.574832ms" Jan 29 12:01:30.241416 
containerd[1455]: time="2025-01-29T12:01:30.240587712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:30.244343 containerd[1455]: time="2025-01-29T12:01:30.244165177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.288561ms" Jan 29 12:01:30.247458 containerd[1455]: time="2025-01-29T12:01:30.247421490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.986879ms" Jan 29 12:01:30.302770 kubelet[2300]: W0129 12:01:30.302578 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:30.302770 kubelet[2300]: E0129 12:01:30.302661 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:30.348427 kubelet[2300]: W0129 12:01:30.348291 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:30.348427 kubelet[2300]: E0129 12:01:30.348361 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:30.401407 containerd[1455]: time="2025-01-29T12:01:30.401044114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:30.401648 containerd[1455]: time="2025-01-29T12:01:30.401421464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:30.402173 containerd[1455]: time="2025-01-29T12:01:30.402103485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.402397 containerd[1455]: time="2025-01-29T12:01:30.402347279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.404697 containerd[1455]: time="2025-01-29T12:01:30.404598139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:30.404697 containerd[1455]: time="2025-01-29T12:01:30.404646818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:30.404697 containerd[1455]: time="2025-01-29T12:01:30.404658257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.405841 containerd[1455]: time="2025-01-29T12:01:30.405133565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.406162 containerd[1455]: time="2025-01-29T12:01:30.405532674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:30.406162 containerd[1455]: time="2025-01-29T12:01:30.405575633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:30.406162 containerd[1455]: time="2025-01-29T12:01:30.405599192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.406162 containerd[1455]: time="2025-01-29T12:01:30.405766188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:30.416525 kubelet[2300]: W0129 12:01:30.416465 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-9-89f64f6996&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:30.416656 kubelet[2300]: E0129 12:01:30.416533 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-9-89f64f6996&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:30.426277 systemd[1]: Started cri-containerd-f8b9658e19e4e56fd02e9223779b04406f32c7214fbe1cff2cc60f8fe5cb9a48.scope - libcontainer container f8b9658e19e4e56fd02e9223779b04406f32c7214fbe1cff2cc60f8fe5cb9a48. Jan 29 12:01:30.442257 systemd[1]: Started cri-containerd-3a4aa691cfa823e55c0db9434bae4f4dbb3a04f66e4cfeb5880c1b79a3c41990.scope - libcontainer container 3a4aa691cfa823e55c0db9434bae4f4dbb3a04f66e4cfeb5880c1b79a3c41990. Jan 29 12:01:30.444080 systemd[1]: Started cri-containerd-db6f444f31e9d8c378ca1fd3a42541b3bc8c9b1cda2ef0f7856114af53f46ea4.scope - libcontainer container db6f444f31e9d8c378ca1fd3a42541b3bc8c9b1cda2ef0f7856114af53f46ea4. 
Jan 29 12:01:30.490228 kubelet[2300]: W0129 12:01:30.489865 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 12:01:30.490228 kubelet[2300]: E0129 12:01:30.489936 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:01:30.495036 containerd[1455]: time="2025-01-29T12:01:30.494860532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-9-89f64f6996,Uid:21f311286ec187d4a317e32171965203,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8b9658e19e4e56fd02e9223779b04406f32c7214fbe1cff2cc60f8fe5cb9a48\"" Jan 29 12:01:30.500926 containerd[1455]: time="2025-01-29T12:01:30.500888372Z" level=info msg="CreateContainer within sandbox \"f8b9658e19e4e56fd02e9223779b04406f32c7214fbe1cff2cc60f8fe5cb9a48\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:01:30.508244 containerd[1455]: time="2025-01-29T12:01:30.508134938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-9-89f64f6996,Uid:6a5419136cb7bfc86cf2e6a6431e1d9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a4aa691cfa823e55c0db9434bae4f4dbb3a04f66e4cfeb5880c1b79a3c41990\"" Jan 29 12:01:30.511529 containerd[1455]: time="2025-01-29T12:01:30.511385372Z" level=info msg="CreateContainer within sandbox \"3a4aa691cfa823e55c0db9434bae4f4dbb3a04f66e4cfeb5880c1b79a3c41990\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:01:30.515299 containerd[1455]: time="2025-01-29T12:01:30.515211830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-9-89f64f6996,Uid:5ca2ceaf4778ea915f76a4a82d4db39f,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6f444f31e9d8c378ca1fd3a42541b3bc8c9b1cda2ef0f7856114af53f46ea4\"" Jan 29 12:01:30.520462 containerd[1455]: time="2025-01-29T12:01:30.520313814Z" level=info msg="CreateContainer within sandbox \"db6f444f31e9d8c378ca1fd3a42541b3bc8c9b1cda2ef0f7856114af53f46ea4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:01:30.530691 containerd[1455]: time="2025-01-29T12:01:30.530500662Z" level=info msg="CreateContainer within sandbox \"f8b9658e19e4e56fd02e9223779b04406f32c7214fbe1cff2cc60f8fe5cb9a48\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e31f82994a21a558de2c8d414219c25d57f6a9ae42fc9ed2f424aa68fe4a511a\"" Jan 29 12:01:30.532784 containerd[1455]: time="2025-01-29T12:01:30.531639752Z" level=info msg="StartContainer for \"e31f82994a21a558de2c8d414219c25d57f6a9ae42fc9ed2f424aa68fe4a511a\"" Jan 29 12:01:30.535976 containerd[1455]: time="2025-01-29T12:01:30.535944437Z" level=info msg="CreateContainer within sandbox \"3a4aa691cfa823e55c0db9434bae4f4dbb3a04f66e4cfeb5880c1b79a3c41990\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a53226c7703385c2152a7868f3477d532f7bd8ec518bee57fd3e66ae6ccedbc0\"" Jan 29 12:01:30.536897 containerd[1455]: time="2025-01-29T12:01:30.536874372Z" level=info msg="StartContainer for 
\"a53226c7703385c2152a7868f3477d532f7bd8ec518bee57fd3e66ae6ccedbc0\"" Jan 29 12:01:30.540412 containerd[1455]: time="2025-01-29T12:01:30.540331120Z" level=info msg="CreateContainer within sandbox \"db6f444f31e9d8c378ca1fd3a42541b3bc8c9b1cda2ef0f7856114af53f46ea4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"157df7441b04e3d41364b2b569d451228d4259a9a4f548f81d38f46f9472e2d8\"" Jan 29 12:01:30.542828 containerd[1455]: time="2025-01-29T12:01:30.542490462Z" level=info msg="StartContainer for \"157df7441b04e3d41364b2b569d451228d4259a9a4f548f81d38f46f9472e2d8\"" Jan 29 12:01:30.573214 systemd[1]: Started cri-containerd-e31f82994a21a558de2c8d414219c25d57f6a9ae42fc9ed2f424aa68fe4a511a.scope - libcontainer container e31f82994a21a558de2c8d414219c25d57f6a9ae42fc9ed2f424aa68fe4a511a. Jan 29 12:01:30.584283 systemd[1]: Started cri-containerd-157df7441b04e3d41364b2b569d451228d4259a9a4f548f81d38f46f9472e2d8.scope - libcontainer container 157df7441b04e3d41364b2b569d451228d4259a9a4f548f81d38f46f9472e2d8. Jan 29 12:01:30.585921 systemd[1]: Started cri-containerd-a53226c7703385c2152a7868f3477d532f7bd8ec518bee57fd3e66ae6ccedbc0.scope - libcontainer container a53226c7703385c2152a7868f3477d532f7bd8ec518bee57fd3e66ae6ccedbc0. Jan 29 12:01:30.611050 kubelet[2300]: E0129 12:01:30.611013 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-9-89f64f6996?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="1.6s" Jan 29 12:01:30.641680 containerd[1455]: time="2025-01-29T12:01:30.641638859Z" level=info msg="StartContainer for \"e31f82994a21a558de2c8d414219c25d57f6a9ae42fc9ed2f424aa68fe4a511a\" returns successfully" Jan 29 12:01:30.652383 containerd[1455]: time="2025-01-29T12:01:30.651571394Z" level=info msg="StartContainer for \"a53226c7703385c2152a7868f3477d532f7bd8ec518bee57fd3e66ae6ccedbc0\" returns successfully" Jan 29 12:01:30.655149 containerd[1455]: time="2025-01-29T12:01:30.655109499Z" level=info msg="StartContainer for \"157df7441b04e3d41364b2b569d451228d4259a9a4f548f81d38f46f9472e2d8\" returns successfully" Jan 29 12:01:30.780244 kubelet[2300]: I0129 12:01:30.780187 2300 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:31.226295 kubelet[2300]: E0129 12:01:31.226200 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:31.227037 kubelet[2300]: E0129 12:01:31.226239 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:31.232378 kubelet[2300]: E0129 12:01:31.232352 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:32.233677 kubelet[2300]: E0129 12:01:32.233435 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:32.233677 kubelet[2300]: E0129 12:01:32.233538 2300 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-9-89f64f6996\" not 
found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:32.882087 kubelet[2300]: E0129 12:01:32.881271 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-9-89f64f6996\" not found" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:32.949432 kubelet[2300]: I0129 12:01:32.949391 2300 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:32.949432 kubelet[2300]: E0129 12:01:32.949431 2300 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-0-9-89f64f6996\": node \"ci-4081-3-0-9-89f64f6996\" not found" Jan 29 12:01:32.953526 kubelet[2300]: E0129 12:01:32.953491 2300 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-9-89f64f6996\" not found" Jan 29 12:01:33.054671 kubelet[2300]: E0129 12:01:33.054603 2300 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-9-89f64f6996\" not found" Jan 29 12:01:33.155533 kubelet[2300]: E0129 12:01:33.155423 2300 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-9-89f64f6996\" not found" Jan 29 12:01:33.284517 kubelet[2300]: I0129 12:01:33.284344 2300 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:33.297575 kubelet[2300]: E0129 12:01:33.297223 2300 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:33.297575 kubelet[2300]: I0129 12:01:33.297258 2300 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:33.299889 kubelet[2300]: E0129 12:01:33.299742 2300 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-9-89f64f6996\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:33.299889 kubelet[2300]: I0129 12:01:33.299819 2300 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:33.302050 kubelet[2300]: E0129 12:01:33.302015 2300 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:34.172087 kubelet[2300]: I0129 12:01:34.171944 2300 apiserver.go:52] "Watching apiserver" Jan 29 12:01:34.181786 kubelet[2300]: I0129 12:01:34.181743 2300 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:34.809226 kubelet[2300]: I0129 12:01:34.809165 2300 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:34.874467 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-5.scope)... Jan 29 12:01:34.874903 systemd[1]: Reloading... Jan 29 12:01:34.990135 zram_generator::config[2617]: No configuration found. 
Jan 29 12:01:35.092855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:35.176494 systemd[1]: Reloading finished in 301 ms. Jan 29 12:01:35.222894 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:35.236247 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:01:35.236780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:35.236871 systemd[1]: kubelet.service: Consumed 1.645s CPU time, 122.8M memory peak, 0B memory swap peak. Jan 29 12:01:35.252911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:35.363936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:35.378534 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:35.431950 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:35.432813 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 12:01:35.432813 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:35.432988 kubelet[2662]: I0129 12:01:35.432950 2662 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:35.441697 kubelet[2662]: I0129 12:01:35.441570 2662 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:01:35.441697 kubelet[2662]: I0129 12:01:35.441629 2662 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:35.442108 kubelet[2662]: I0129 12:01:35.442019 2662 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:01:35.443522 kubelet[2662]: I0129 12:01:35.443472 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:01:35.446143 kubelet[2662]: I0129 12:01:35.446101 2662 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:35.451119 kubelet[2662]: E0129 12:01:35.450356 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:01:35.451119 kubelet[2662]: I0129 12:01:35.450388 2662 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:01:35.457148 kubelet[2662]: I0129 12:01:35.457102 2662 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:01:35.457400 kubelet[2662]: I0129 12:01:35.457343 2662 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:35.457565 kubelet[2662]: I0129 12:01:35.457379 2662 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-9-89f64f6996","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:01:35.457565 kubelet[2662]: I0129 12:01:35.457562 2662 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:01:35.457565 kubelet[2662]: I0129 12:01:35.457570 2662 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:01:35.457801 kubelet[2662]: I0129 12:01:35.457657 2662 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:35.457844 kubelet[2662]: I0129 12:01:35.457805 2662 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:01:35.457844 kubelet[2662]: I0129 12:01:35.457816 2662 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:35.457844 kubelet[2662]: I0129 12:01:35.457834 2662 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:01:35.457844 kubelet[2662]: I0129 12:01:35.457845 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:35.462246 kubelet[2662]: I0129 12:01:35.462149 2662 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:35.465579 kubelet[2662]: I0129 12:01:35.465547 2662 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:35.478089 kubelet[2662]: I0129 12:01:35.477947 2662 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:01:35.478089 kubelet[2662]: I0129 12:01:35.477991 2662 server.go:1287] "Started kubelet" Jan 29 12:01:35.482680 kubelet[2662]: I0129 12:01:35.482535 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:35.491883 kubelet[2662]: I0129 12:01:35.491836 2662 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:35.494819 kubelet[2662]: I0129 12:01:35.493464 2662 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:01:35.494819 kubelet[2662]: I0129 12:01:35.493549 2662 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:01:35.494819 kubelet[2662]: I0129 12:01:35.493958 2662 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:35.494819 kubelet[2662]: I0129 12:01:35.494132 2662 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:35.495577 kubelet[2662]: I0129 12:01:35.495511 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:35.495834 kubelet[2662]: I0129 12:01:35.495818 2662 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:35.496329 kubelet[2662]: I0129 12:01:35.496308 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:01:35.499256 kubelet[2662]: I0129 12:01:35.498528 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:35.500260 kubelet[2662]: E0129 12:01:35.500232 2662 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:35.504304 kubelet[2662]: I0129 12:01:35.503690 2662 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:35.504304 kubelet[2662]: I0129 12:01:35.503715 2662 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:35.504304 kubelet[2662]: I0129 12:01:35.503971 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:35.506293 kubelet[2662]: I0129 12:01:35.506264 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:01:35.506730 kubelet[2662]: I0129 12:01:35.506428 2662 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:01:35.506730 kubelet[2662]: I0129 12:01:35.506454 2662 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 12:01:35.506730 kubelet[2662]: I0129 12:01:35.506461 2662 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:01:35.506730 kubelet[2662]: E0129 12:01:35.506503 2662 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:35.559561 kubelet[2662]: I0129 12:01:35.559530 2662 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.559875 2662 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.559908 2662 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560164 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560181 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560205 2662 policy_none.go:49] "None policy: Start" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560219 2662 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560232 2662 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:35.560465 kubelet[2662]: I0129 12:01:35.560376 2662 state_mem.go:75] "Updated machine memory state" Jan 29 12:01:35.566189 kubelet[2662]: I0129 12:01:35.566164 2662 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:35.566813 kubelet[2662]: I0129 12:01:35.566795 2662 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:01:35.566929 kubelet[2662]: I0129 12:01:35.566893 2662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:35.567486 kubelet[2662]: I0129 12:01:35.567402 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:35.568651 kubelet[2662]: E0129 12:01:35.568619 2662 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 12:01:35.608319 kubelet[2662]: I0129 12:01:35.608203 2662 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.609201 kubelet[2662]: I0129 12:01:35.608202 2662 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.611907 kubelet[2662]: I0129 12:01:35.611874 2662 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.622467 kubelet[2662]: E0129 12:01:35.621669 2662 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-9-89f64f6996\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.674129 kubelet[2662]: I0129 12:01:35.674068 2662 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.683469 kubelet[2662]: I0129 12:01:35.683389 2662 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.683469 kubelet[2662]: I0129 12:01:35.683481 2662 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.695810 kubelet[2662]: I0129 12:01:35.695549 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.695810 kubelet[2662]: I0129 12:01:35.695616 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.695810 kubelet[2662]: I0129 12:01:35.695643 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.695810 kubelet[2662]: I0129 12:01:35.695663 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.695810 kubelet[2662]: I0129 12:01:35.695683 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.696049 kubelet[2662]: I0129 12:01:35.695704 2662 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.696049 kubelet[2662]: I0129 12:01:35.695722 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a5419136cb7bfc86cf2e6a6431e1d9f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-9-89f64f6996\" (UID: \"6a5419136cb7bfc86cf2e6a6431e1d9f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.696049 kubelet[2662]: I0129 12:01:35.695739 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ca2ceaf4778ea915f76a4a82d4db39f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-9-89f64f6996\" (UID: \"5ca2ceaf4778ea915f76a4a82d4db39f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:35.696049 kubelet[2662]: I0129 12:01:35.695758 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21f311286ec187d4a317e32171965203-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-9-89f64f6996\" (UID: \"21f311286ec187d4a317e32171965203\") " pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:36.461696 kubelet[2662]: I0129 12:01:36.461625 2662 apiserver.go:52] "Watching apiserver" Jan 29 12:01:36.494327 kubelet[2662]: I0129 12:01:36.494276 2662 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:36.543254 kubelet[2662]: I0129 12:01:36.541358 2662 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:36.561500 kubelet[2662]: E0129 12:01:36.561264 2662 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-9-89f64f6996\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" Jan 29 12:01:36.595269 kubelet[2662]: I0129 12:01:36.595185 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-9-89f64f6996" podStartSLOduration=2.595165055 podStartE2EDuration="2.595165055s" podCreationTimestamp="2025-01-29 12:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:36.575282779 +0000 UTC m=+1.192370062" watchObservedRunningTime="2025-01-29 12:01:36.595165055 +0000 UTC m=+1.212252378" Jan 29 12:01:36.608440 kubelet[2662]: I0129 12:01:36.608374 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-9-89f64f6996" podStartSLOduration=1.608354573 podStartE2EDuration="1.608354573s" podCreationTimestamp="2025-01-29 12:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:36.595406574 +0000 UTC m=+1.212493897" watchObservedRunningTime="2025-01-29 12:01:36.608354573 +0000 UTC m=+1.225441856" Jan 29 12:01:36.959688 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 29 
12:01:37.121506 sshd[1816]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:37.126527 systemd[1]: sshd@4-91.107.217.81:22-139.178.89.65:48758.service: Deactivated successfully. Jan 29 12:01:37.128882 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:01:37.129433 systemd[1]: session-5.scope: Consumed 5.305s CPU time, 151.1M memory peak, 0B memory swap peak. Jan 29 12:01:37.130392 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:01:37.131618 systemd-logind[1448]: Removed session 5. Jan 29 12:01:40.312452 kubelet[2662]: I0129 12:01:40.312261 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-9-89f64f6996" podStartSLOduration=5.312240886 podStartE2EDuration="5.312240886s" podCreationTimestamp="2025-01-29 12:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:36.610276441 +0000 UTC m=+1.227363724" watchObservedRunningTime="2025-01-29 12:01:40.312240886 +0000 UTC m=+4.929328169" Jan 29 12:01:41.522803 kubelet[2662]: I0129 12:01:41.522696 2662 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:01:41.523999 kubelet[2662]: I0129 12:01:41.523886 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:01:41.524117 containerd[1455]: time="2025-01-29T12:01:41.523603476Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:01:42.562589 systemd[1]: Created slice kubepods-besteffort-podc3a57107_3c93_4156_945c_434f6f6726e5.slice - libcontainer container kubepods-besteffort-podc3a57107_3c93_4156_945c_434f6f6726e5.slice. 
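The kubelet entries just above (12:01:41) record the node being handed the pod CIDR 192.168.0.0/24 and the runtime config being updated accordingly. A minimal client-go sketch for reading that assignment back from the API server is below; it is illustrative only, the kubeconfig path /etc/kubernetes/admin.conf is an assumption, and only the node name is taken from this log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; any admin kubeconfig for this cluster would do.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Node name taken from the log above.
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ci-4081-3-0-9-89f64f6996", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("podCIDR:", node.Spec.PodCIDR) // expected here: 192.168.0.0/24
    }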
Jan 29 12:01:42.563340 kubelet[2662]: W0129 12:01:42.562745 2662 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-9-89f64f6996" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-9-89f64f6996' and this object Jan 29 12:01:42.563340 kubelet[2662]: E0129 12:01:42.562790 2662 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-0-9-89f64f6996\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-0-9-89f64f6996' and this object" logger="UnhandledError" Jan 29 12:01:42.564908 kubelet[2662]: I0129 12:01:42.562143 2662 status_manager.go:890] "Failed to get status for pod" podUID="c3a57107-3c93-4156-945c-434f6f6726e5" pod="kube-system/kube-proxy-vnvw2" err="pods \"kube-proxy-vnvw2\" is forbidden: User \"system:node:ci-4081-3-0-9-89f64f6996\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-0-9-89f64f6996' and this object" Jan 29 12:01:42.564908 kubelet[2662]: W0129 12:01:42.564290 2662 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-0-9-89f64f6996" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-9-89f64f6996' and this object Jan 29 12:01:42.564908 kubelet[2662]: E0129 12:01:42.564332 2662 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-0-9-89f64f6996\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-0-9-89f64f6996' and this object" logger="UnhandledError" Jan 29 12:01:42.590587 systemd[1]: Created slice kubepods-burstable-pod0d7bdf3d_532f_47a2_bd0d_14314d2ac2d4.slice - libcontainer container kubepods-burstable-pod0d7bdf3d_532f_47a2_bd0d_14314d2ac2d4.slice. 
Jan 29 12:01:42.636724 kubelet[2662]: I0129 12:01:42.635288 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-cni-plugin\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636724 kubelet[2662]: I0129 12:01:42.635345 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4flq\" (UniqueName: \"kubernetes.io/projected/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-kube-api-access-b4flq\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636724 kubelet[2662]: I0129 12:01:42.635391 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3a57107-3c93-4156-945c-434f6f6726e5-xtables-lock\") pod \"kube-proxy-vnvw2\" (UID: \"c3a57107-3c93-4156-945c-434f6f6726e5\") " pod="kube-system/kube-proxy-vnvw2" Jan 29 12:01:42.636724 kubelet[2662]: I0129 12:01:42.635408 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-xtables-lock\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636724 kubelet[2662]: I0129 12:01:42.635433 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-run\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636949 kubelet[2662]: I0129 12:01:42.635491 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-cni\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636949 kubelet[2662]: I0129 12:01:42.635508 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4-flannel-cfg\") pod \"kube-flannel-ds-cznfl\" (UID: \"0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4\") " pod="kube-flannel/kube-flannel-ds-cznfl" Jan 29 12:01:42.636949 kubelet[2662]: I0129 12:01:42.635524 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3a57107-3c93-4156-945c-434f6f6726e5-kube-proxy\") pod \"kube-proxy-vnvw2\" (UID: \"c3a57107-3c93-4156-945c-434f6f6726e5\") " pod="kube-system/kube-proxy-vnvw2" Jan 29 12:01:42.636949 kubelet[2662]: I0129 12:01:42.635598 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3a57107-3c93-4156-945c-434f6f6726e5-lib-modules\") pod \"kube-proxy-vnvw2\" (UID: \"c3a57107-3c93-4156-945c-434f6f6726e5\") " pod="kube-system/kube-proxy-vnvw2" Jan 29 12:01:42.636949 kubelet[2662]: I0129 12:01:42.635619 2662 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8xp\" (UniqueName: \"kubernetes.io/projected/c3a57107-3c93-4156-945c-434f6f6726e5-kube-api-access-cj8xp\") pod \"kube-proxy-vnvw2\" (UID: \"c3a57107-3c93-4156-945c-434f6f6726e5\") " pod="kube-system/kube-proxy-vnvw2" Jan 29 12:01:42.896534 containerd[1455]: time="2025-01-29T12:01:42.896286171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cznfl,Uid:0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4,Namespace:kube-flannel,Attempt:0,}" Jan 29 12:01:42.924369 containerd[1455]: time="2025-01-29T12:01:42.924235389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:42.924369 containerd[1455]: time="2025-01-29T12:01:42.924315670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:42.924369 containerd[1455]: time="2025-01-29T12:01:42.924336390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:42.924952 containerd[1455]: time="2025-01-29T12:01:42.924865475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:42.949236 systemd[1]: Started cri-containerd-f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced.scope - libcontainer container f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced. Jan 29 12:01:42.982364 containerd[1455]: time="2025-01-29T12:01:42.982318687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cznfl,Uid:0d7bdf3d-532f-47a2-bd0d-14314d2ac2d4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\"" Jan 29 12:01:42.987012 containerd[1455]: time="2025-01-29T12:01:42.986958497Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 12:01:43.775657 containerd[1455]: time="2025-01-29T12:01:43.775602367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vnvw2,Uid:c3a57107-3c93-4156-945c-434f6f6726e5,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:43.800471 containerd[1455]: time="2025-01-29T12:01:43.799780285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:43.800471 containerd[1455]: time="2025-01-29T12:01:43.799935207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:43.800471 containerd[1455]: time="2025-01-29T12:01:43.799960487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:43.800471 containerd[1455]: time="2025-01-29T12:01:43.800289892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:43.821412 systemd[1]: run-containerd-runc-k8s.io-27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b-runc.46ctgw.mount: Deactivated successfully. Jan 29 12:01:43.831429 systemd[1]: Started cri-containerd-27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b.scope - libcontainer container 27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b. 
Jan 29 12:01:43.855824 containerd[1455]: time="2025-01-29T12:01:43.855757502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vnvw2,Uid:c3a57107-3c93-4156-945c-434f6f6726e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b\"" Jan 29 12:01:43.858810 containerd[1455]: time="2025-01-29T12:01:43.858672700Z" level=info msg="CreateContainer within sandbox \"27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:01:43.872594 containerd[1455]: time="2025-01-29T12:01:43.872422041Z" level=info msg="CreateContainer within sandbox \"27606e51f1bc4d85302294cb7f66ce5838f91ba36913b5f33a7526980fc23d4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6fa57e714ea07af9b9d040628c5214103d93d7c06738ff340fcf08e82decb737\"" Jan 29 12:01:43.876142 containerd[1455]: time="2025-01-29T12:01:43.874178025Z" level=info msg="StartContainer for \"6fa57e714ea07af9b9d040628c5214103d93d7c06738ff340fcf08e82decb737\"" Jan 29 12:01:43.903293 systemd[1]: Started cri-containerd-6fa57e714ea07af9b9d040628c5214103d93d7c06738ff340fcf08e82decb737.scope - libcontainer container 6fa57e714ea07af9b9d040628c5214103d93d7c06738ff340fcf08e82decb737. Jan 29 12:01:43.934000 containerd[1455]: time="2025-01-29T12:01:43.933909851Z" level=info msg="StartContainer for \"6fa57e714ea07af9b9d040628c5214103d93d7c06738ff340fcf08e82decb737\" returns successfully" Jan 29 12:01:45.477726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553499275.mount: Deactivated successfully. Jan 29 12:01:45.513520 containerd[1455]: time="2025-01-29T12:01:45.513462222Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:45.516600 containerd[1455]: time="2025-01-29T12:01:45.516544878Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Jan 29 12:01:45.517087 containerd[1455]: time="2025-01-29T12:01:45.517034646Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:45.525090 containerd[1455]: time="2025-01-29T12:01:45.524869267Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:45.525770 containerd[1455]: time="2025-01-29T12:01:45.525732883Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.538550103s" Jan 29 12:01:45.526244 containerd[1455]: time="2025-01-29T12:01:45.525850045Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 29 12:01:45.528537 containerd[1455]: time="2025-01-29T12:01:45.528384450Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 12:01:45.544097 containerd[1455]: time="2025-01-29T12:01:45.544012971Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89\"" Jan 29 12:01:45.544851 containerd[1455]: time="2025-01-29T12:01:45.544712423Z" level=info msg="StartContainer for \"d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89\"" Jan 29 12:01:45.574384 systemd[1]: Started cri-containerd-d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89.scope - libcontainer container d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89. Jan 29 12:01:45.609450 systemd[1]: cri-containerd-d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89.scope: Deactivated successfully. Jan 29 12:01:45.612604 containerd[1455]: time="2025-01-29T12:01:45.612460800Z" level=info msg="StartContainer for \"d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89\" returns successfully" Jan 29 12:01:45.649366 containerd[1455]: time="2025-01-29T12:01:45.649277222Z" level=info msg="shim disconnected" id=d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89 namespace=k8s.io Jan 29 12:01:45.649366 containerd[1455]: time="2025-01-29T12:01:45.649342503Z" level=warning msg="cleaning up after shim disconnected" id=d6c2584a4664755e5c41595a1c94077b4f9eb7fd1c02df12d31b89b1f13cec89 namespace=k8s.io Jan 29 12:01:45.649366 containerd[1455]: time="2025-01-29T12:01:45.649355063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:46.580010 containerd[1455]: time="2025-01-29T12:01:46.579180967Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 12:01:46.599315 kubelet[2662]: I0129 12:01:46.599241 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vnvw2" podStartSLOduration=4.599220773 podStartE2EDuration="4.599220773s" podCreationTimestamp="2025-01-29 12:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:44.580815981 +0000 UTC m=+9.197903264" watchObservedRunningTime="2025-01-29 12:01:46.599220773 +0000 UTC m=+11.216308016" Jan 29 12:01:49.094959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082375518.mount: Deactivated successfully. 
Jan 29 12:01:49.754879 containerd[1455]: time="2025-01-29T12:01:49.754814444Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:49.756376 containerd[1455]: time="2025-01-29T12:01:49.756327364Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 29 12:01:49.757566 containerd[1455]: time="2025-01-29T12:01:49.757511116Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:49.763698 containerd[1455]: time="2025-01-29T12:01:49.763635679Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:49.765521 containerd[1455]: time="2025-01-29T12:01:49.765477808Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.1862484s" Jan 29 12:01:49.765681 containerd[1455]: time="2025-01-29T12:01:49.765657253Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 29 12:01:49.770790 containerd[1455]: time="2025-01-29T12:01:49.770698468Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:01:49.783827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327842108.mount: Deactivated successfully. Jan 29 12:01:49.787660 containerd[1455]: time="2025-01-29T12:01:49.787604039Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25\"" Jan 29 12:01:49.788422 containerd[1455]: time="2025-01-29T12:01:49.788398580Z" level=info msg="StartContainer for \"50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25\"" Jan 29 12:01:49.821422 systemd[1]: Started cri-containerd-50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25.scope - libcontainer container 50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25. Jan 29 12:01:49.848070 systemd[1]: cri-containerd-50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25.scope: Deactivated successfully. Jan 29 12:01:49.852347 containerd[1455]: time="2025-01-29T12:01:49.851665149Z" level=info msg="StartContainer for \"50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25\" returns successfully" Jan 29 12:01:49.859782 kubelet[2662]: I0129 12:01:49.858777 2662 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 12:01:49.913592 systemd[1]: Created slice kubepods-burstable-pod442293ba_3154_46ad_8b97_b73f914b6cd8.slice - libcontainer container kubepods-burstable-pod442293ba_3154_46ad_8b97_b73f914b6cd8.slice. 
Jan 29 12:01:49.925166 systemd[1]: Created slice kubepods-burstable-pod2228a841_9bd9_4b1e_883a_f495c3cc4294.slice - libcontainer container kubepods-burstable-pod2228a841_9bd9_4b1e_883a_f495c3cc4294.slice. Jan 29 12:01:49.942296 containerd[1455]: time="2025-01-29T12:01:49.942148883Z" level=info msg="shim disconnected" id=50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25 namespace=k8s.io Jan 29 12:01:49.942296 containerd[1455]: time="2025-01-29T12:01:49.942225525Z" level=warning msg="cleaning up after shim disconnected" id=50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25 namespace=k8s.io Jan 29 12:01:49.942296 containerd[1455]: time="2025-01-29T12:01:49.942245286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:49.990096 kubelet[2662]: I0129 12:01:49.988819 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjn7\" (UniqueName: \"kubernetes.io/projected/442293ba-3154-46ad-8b97-b73f914b6cd8-kube-api-access-prjn7\") pod \"coredns-668d6bf9bc-rvc6n\" (UID: \"442293ba-3154-46ad-8b97-b73f914b6cd8\") " pod="kube-system/coredns-668d6bf9bc-rvc6n" Jan 29 12:01:49.990096 kubelet[2662]: I0129 12:01:49.988864 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2228a841-9bd9-4b1e-883a-f495c3cc4294-config-volume\") pod \"coredns-668d6bf9bc-9jjll\" (UID: \"2228a841-9bd9-4b1e-883a-f495c3cc4294\") " pod="kube-system/coredns-668d6bf9bc-9jjll" Jan 29 12:01:49.990096 kubelet[2662]: I0129 12:01:49.988890 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/442293ba-3154-46ad-8b97-b73f914b6cd8-config-volume\") pod \"coredns-668d6bf9bc-rvc6n\" (UID: \"442293ba-3154-46ad-8b97-b73f914b6cd8\") " pod="kube-system/coredns-668d6bf9bc-rvc6n" Jan 29 12:01:49.990096 kubelet[2662]: I0129 12:01:49.988931 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zd8h\" (UniqueName: \"kubernetes.io/projected/2228a841-9bd9-4b1e-883a-f495c3cc4294-kube-api-access-5zd8h\") pod \"coredns-668d6bf9bc-9jjll\" (UID: \"2228a841-9bd9-4b1e-883a-f495c3cc4294\") " pod="kube-system/coredns-668d6bf9bc-9jjll" Jan 29 12:01:49.990419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c0da7aef90f38154078d9f04ee2af292a3dfa9943b55c19c3624223dafee25-rootfs.mount: Deactivated successfully. 
Jan 29 12:01:50.222900 containerd[1455]: time="2025-01-29T12:01:50.222814619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvc6n,Uid:442293ba-3154-46ad-8b97-b73f914b6cd8,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:50.231178 containerd[1455]: time="2025-01-29T12:01:50.231124257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jjll,Uid:2228a841-9bd9-4b1e-883a-f495c3cc4294,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:50.283535 containerd[1455]: time="2025-01-29T12:01:50.283306275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvc6n,Uid:442293ba-3154-46ad-8b97-b73f914b6cd8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:50.284142 kubelet[2662]: E0129 12:01:50.283921 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:50.284142 kubelet[2662]: E0129 12:01:50.284002 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rvc6n" Jan 29 12:01:50.284467 kubelet[2662]: E0129 12:01:50.284274 2662 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rvc6n" Jan 29 12:01:50.284467 kubelet[2662]: E0129 12:01:50.284363 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rvc6n_kube-system(442293ba-3154-46ad-8b97-b73f914b6cd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rvc6n_kube-system(442293ba-3154-46ad-8b97-b73f914b6cd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-rvc6n" podUID="442293ba-3154-46ad-8b97-b73f914b6cd8" Jan 29 12:01:50.287281 containerd[1455]: time="2025-01-29T12:01:50.287215787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jjll,Uid:2228a841-9bd9-4b1e-883a-f495c3cc4294,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:50.287725 kubelet[2662]: E0129 12:01:50.287691 2662 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:50.287797 kubelet[2662]: E0129 12:01:50.287742 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9jjll" Jan 29 12:01:50.287797 kubelet[2662]: E0129 12:01:50.287760 2662 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9jjll" Jan 29 12:01:50.287877 kubelet[2662]: E0129 12:01:50.287800 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9jjll_kube-system(2228a841-9bd9-4b1e-883a-f495c3cc4294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9jjll_kube-system(2228a841-9bd9-4b1e-883a-f495c3cc4294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-9jjll" podUID="2228a841-9bd9-4b1e-883a-f495c3cc4294" Jan 29 12:01:50.590818 containerd[1455]: time="2025-01-29T12:01:50.590761179Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 29 12:01:50.604783 containerd[1455]: time="2025-01-29T12:01:50.603911596Z" level=info msg="CreateContainer within sandbox \"f13d28f04aeca03024d730c71595ce0c671b67cf6686e3e378cc338024c66ced\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c592ac1148a368763ab1662aa36ed51aa4078d377d4c13b0d7be06ecd0119ac4\"" Jan 29 12:01:50.604996 containerd[1455]: time="2025-01-29T12:01:50.604902265Z" level=info msg="StartContainer for \"c592ac1148a368763ab1662aa36ed51aa4078d377d4c13b0d7be06ecd0119ac4\"" Jan 29 12:01:50.634265 systemd[1]: Started cri-containerd-c592ac1148a368763ab1662aa36ed51aa4078d377d4c13b0d7be06ecd0119ac4.scope - libcontainer container c592ac1148a368763ab1662aa36ed51aa4078d377d4c13b0d7be06ecd0119ac4. Jan 29 12:01:50.664302 containerd[1455]: time="2025-01-29T12:01:50.664255608Z" level=info msg="StartContainer for \"c592ac1148a368763ab1662aa36ed51aa4078d377d4c13b0d7be06ecd0119ac4\" returns successfully" Jan 29 12:01:50.990899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4282af5436d15b170e124385c53c244467e07dbeea7e343caf19ccf06a670c26-shm.mount: Deactivated successfully. Jan 29 12:01:50.990994 systemd[1]: run-netns-cni\x2d923612b8\x2d5c0d\x2dab4d\x2d00aa\x2ddd9a6154bd54.mount: Deactivated successfully. 
Jan 29 12:01:50.991043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad55b0be1382e790cbaa7b866bde4a9363913752e171bb1c1accb587164595bf-shm.mount: Deactivated successfully. Jan 29 12:01:51.609087 kubelet[2662]: I0129 12:01:51.608439 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cznfl" podStartSLOduration=2.826057478 podStartE2EDuration="9.608414851s" podCreationTimestamp="2025-01-29 12:01:42 +0000 UTC" firstStartedPulling="2025-01-29 12:01:42.98442155 +0000 UTC m=+7.601508833" lastFinishedPulling="2025-01-29 12:01:49.766778923 +0000 UTC m=+14.383866206" observedRunningTime="2025-01-29 12:01:51.608139083 +0000 UTC m=+16.225226366" watchObservedRunningTime="2025-01-29 12:01:51.608414851 +0000 UTC m=+16.225502174" Jan 29 12:01:51.738784 systemd-networkd[1369]: flannel.1: Link UP Jan 29 12:01:51.738796 systemd-networkd[1369]: flannel.1: Gained carrier Jan 29 12:01:53.730516 systemd-networkd[1369]: flannel.1: Gained IPv6LL Jan 29 12:02:02.508250 containerd[1455]: time="2025-01-29T12:02:02.508123342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jjll,Uid:2228a841-9bd9-4b1e-883a-f495c3cc4294,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:02.533928 systemd-networkd[1369]: cni0: Link UP Jan 29 12:02:02.533936 systemd-networkd[1369]: cni0: Gained carrier Jan 29 12:02:02.537361 systemd-networkd[1369]: cni0: Lost carrier Jan 29 12:02:02.542656 systemd-networkd[1369]: veth633f4183: Link UP Jan 29 12:02:02.545128 kernel: cni0: port 1(veth633f4183) entered blocking state Jan 29 12:02:02.545208 kernel: cni0: port 1(veth633f4183) entered disabled state Jan 29 12:02:02.545235 kernel: veth633f4183: entered allmulticast mode Jan 29 12:02:02.546163 kernel: veth633f4183: entered promiscuous mode Jan 29 12:02:02.547117 kernel: cni0: port 1(veth633f4183) entered blocking state Jan 29 12:02:02.547151 kernel: cni0: port 1(veth633f4183) entered forwarding state Jan 29 12:02:02.551120 kernel: cni0: port 1(veth633f4183) entered disabled state Jan 29 12:02:02.557769 systemd-networkd[1369]: veth633f4183: Gained carrier Jan 29 12:02:02.558095 kernel: cni0: port 1(veth633f4183) entered blocking state Jan 29 12:02:02.558156 kernel: cni0: port 1(veth633f4183) entered forwarding state Jan 29 12:02:02.558008 systemd-networkd[1369]: cni0: Gained carrier Jan 29 12:02:02.562931 containerd[1455]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Jan 29 12:02:02.562931 containerd[1455]: delegateAdd: netconf sent to delegate plugin: Jan 29 12:02:02.588710 containerd[1455]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T12:02:02.588585082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:02.588710 containerd[1455]: time="2025-01-29T12:02:02.588670486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:02.588710 containerd[1455]: time="2025-01-29T12:02:02.588686287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:02.589091 containerd[1455]: time="2025-01-29T12:02:02.589002982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:02.608317 systemd[1]: run-containerd-runc-k8s.io-a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5-runc.nir9oX.mount: Deactivated successfully. Jan 29 12:02:02.618387 systemd[1]: Started cri-containerd-a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5.scope - libcontainer container a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5. Jan 29 12:02:02.653236 containerd[1455]: time="2025-01-29T12:02:02.653177693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jjll,Uid:2228a841-9bd9-4b1e-883a-f495c3cc4294,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5\"" Jan 29 12:02:02.657892 containerd[1455]: time="2025-01-29T12:02:02.657728514Z" level=info msg="CreateContainer within sandbox \"a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:02:02.670666 containerd[1455]: time="2025-01-29T12:02:02.670604818Z" level=info msg="CreateContainer within sandbox \"a8913d980025fed5b71818a382de0636eb9aae0f3b18f6cee72365fa004d19c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41f46363793f76412d049e8db08f56ea635e9aaec940a92a942dd7e8c29dbf97\"" Jan 29 12:02:02.671338 containerd[1455]: time="2025-01-29T12:02:02.671310532Z" level=info msg="StartContainer for \"41f46363793f76412d049e8db08f56ea635e9aaec940a92a942dd7e8c29dbf97\"" Jan 29 12:02:02.702399 systemd[1]: Started cri-containerd-41f46363793f76412d049e8db08f56ea635e9aaec940a92a942dd7e8c29dbf97.scope - libcontainer container 41f46363793f76412d049e8db08f56ea635e9aaec940a92a942dd7e8c29dbf97. Jan 29 12:02:02.733182 containerd[1455]: time="2025-01-29T12:02:02.733134409Z" level=info msg="StartContainer for \"41f46363793f76412d049e8db08f56ea635e9aaec940a92a942dd7e8c29dbf97\" returns successfully" Jan 29 12:02:03.518320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164571100.mount: Deactivated successfully. 
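The bridge delegate configuration that flannel hands to the CNI bridge plugin is printed verbatim in the containerd entry above. The standalone sketch below only unmarshals that JSON to make the relationship explicit: pod addresses on this node come out of the 192.168.0.0/24 range, while the 192.168.0.0/17 route covers the rest of the cluster's pod network via flannel.1. It is illustrative and not part of any component shown here.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Copied from the delegateAdd log entry above.
    const delegate = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

    type ipamConf struct {
        Ranges [][]struct {
            Subnet string `json:"subnet"`
        } `json:"ranges"`
        Routes []struct {
            Dst string `json:"dst"`
        } `json:"routes"`
        Type string `json:"type"`
    }

    type bridgeConf struct {
        Name string   `json:"name"`
        Type string   `json:"type"`
        MTU  int      `json:"mtu"`
        IPAM ipamConf `json:"ipam"`
    }

    func main() {
        var c bridgeConf
        if err := json.Unmarshal([]byte(delegate), &c); err != nil {
            panic(err)
        }
        // Pods on this node are allocated from the /24; the /17 route sends the
        // rest of the cluster's pod traffic over flannel rather than cbr0.
        fmt.Printf("bridge %s (mtu %d): node subnet %s, cluster route %s\n",
            c.Name, c.MTU, c.IPAM.Ranges[0][0].Subnet, c.IPAM.Routes[0].Dst)
    }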
Jan 29 12:02:03.652151 kubelet[2662]: I0129 12:02:03.650992 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9jjll" podStartSLOduration=21.650966602 podStartE2EDuration="21.650966602s" podCreationTimestamp="2025-01-29 12:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:03.63448054 +0000 UTC m=+28.251567863" watchObservedRunningTime="2025-01-29 12:02:03.650966602 +0000 UTC m=+28.268053885"
Jan 29 12:02:04.226412 systemd-networkd[1369]: veth633f4183: Gained IPv6LL
Jan 29 12:02:04.290329 systemd-networkd[1369]: cni0: Gained IPv6LL
Jan 29 12:02:05.509494 containerd[1455]: time="2025-01-29T12:02:05.508856112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvc6n,Uid:442293ba-3154-46ad-8b97-b73f914b6cd8,Namespace:kube-system,Attempt:0,}"
Jan 29 12:02:05.544285 systemd-networkd[1369]: veth45c4c872: Link UP
Jan 29 12:02:05.545616 kernel: cni0: port 2(veth45c4c872) entered blocking state
Jan 29 12:02:05.545650 kernel: cni0: port 2(veth45c4c872) entered disabled state
Jan 29 12:02:05.545668 kernel: veth45c4c872: entered allmulticast mode
Jan 29 12:02:05.545685 kernel: veth45c4c872: entered promiscuous mode
Jan 29 12:02:05.551256 kernel: cni0: port 2(veth45c4c872) entered blocking state
Jan 29 12:02:05.551380 kernel: cni0: port 2(veth45c4c872) entered forwarding state
Jan 29 12:02:05.551485 systemd-networkd[1369]: veth45c4c872: Gained carrier
Jan 29 12:02:05.553783 containerd[1455]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"}
Jan 29 12:02:05.553783 containerd[1455]: delegateAdd: netconf sent to delegate plugin:
Jan 29 12:02:05.583508 containerd[1455]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T12:02:05.583332171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:02:05.584123 containerd[1455]: time="2025-01-29T12:02:05.583840157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:02:05.584123 containerd[1455]: time="2025-01-29T12:02:05.583861478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:05.584123 containerd[1455]: time="2025-01-29T12:02:05.584022167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:02:05.607306 systemd[1]: run-containerd-runc-k8s.io-ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54-runc.LnCB1Z.mount: Deactivated successfully.
Jan 29 12:02:05.617482 systemd[1]: Started cri-containerd-ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54.scope - libcontainer container ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54.
Jan 29 12:02:05.656352 containerd[1455]: time="2025-01-29T12:02:05.656267389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvc6n,Uid:442293ba-3154-46ad-8b97-b73f914b6cd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54\""
Jan 29 12:02:05.660383 containerd[1455]: time="2025-01-29T12:02:05.660071788Z" level=info msg="CreateContainer within sandbox \"ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:02:05.674091 containerd[1455]: time="2025-01-29T12:02:05.674029799Z" level=info msg="CreateContainer within sandbox \"ac50d3e230fe78dd273689e330b686429745c13ba888e1cd63ee2669bdf12b54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49e2ea62eadaf9f29a2f139cb3002ed6835123e7b520375d16b98226e655c866\""
Jan 29 12:02:05.674815 containerd[1455]: time="2025-01-29T12:02:05.674790598Z" level=info msg="StartContainer for \"49e2ea62eadaf9f29a2f139cb3002ed6835123e7b520375d16b98226e655c866\""
Jan 29 12:02:05.703337 systemd[1]: Started cri-containerd-49e2ea62eadaf9f29a2f139cb3002ed6835123e7b520375d16b98226e655c866.scope - libcontainer container 49e2ea62eadaf9f29a2f139cb3002ed6835123e7b520375d16b98226e655c866.
Jan 29 12:02:05.735081 containerd[1455]: time="2025-01-29T12:02:05.734937267Z" level=info msg="StartContainer for \"49e2ea62eadaf9f29a2f139cb3002ed6835123e7b520375d16b98226e655c866\" returns successfully"
Jan 29 12:02:06.524144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957080835.mount: Deactivated successfully.
Jan 29 12:02:06.662776 kubelet[2662]: I0129 12:02:06.662257 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rvc6n" podStartSLOduration=24.662237884 podStartE2EDuration="24.662237884s" podCreationTimestamp="2025-01-29 12:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:06.646142662 +0000 UTC m=+31.263229945" watchObservedRunningTime="2025-01-29 12:02:06.662237884 +0000 UTC m=+31.279325207"
Jan 29 12:02:06.850652 systemd-networkd[1369]: veth45c4c872: Gained IPv6LL
Jan 29 12:05:36.975807 update_engine[1449]: I20250129 12:05:36.975272 1449 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 29 12:05:36.975807 update_engine[1449]: I20250129 12:05:36.975354 1449 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 29 12:05:36.975807 update_engine[1449]: I20250129 12:05:36.975672 1449 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978093 1449 omaha_request_params.cc:62] Current group set to lts
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978430 1449 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978485 1449 update_attempter.cc:643] Scheduling an action processor start.
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978520 1449 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978709 1449 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978888 1449 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978904 1449 omaha_request_action.cc:272] Request:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]:
Jan 29 12:05:36.979888 update_engine[1449]: I20250129 12:05:36.978911 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:05:36.982132 update_engine[1449]: I20250129 12:05:36.982098 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:05:36.982617 update_engine[1449]: I20250129 12:05:36.982555 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:05:36.982782 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 29 12:05:36.986121 update_engine[1449]: E20250129 12:05:36.986088 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:05:36.986293 update_engine[1449]: I20250129 12:05:36.986263 1449 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 29 12:05:46.884030 update_engine[1449]: I20250129 12:05:46.883503 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:05:46.884030 update_engine[1449]: I20250129 12:05:46.883915 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:05:46.884756 update_engine[1449]: I20250129 12:05:46.884253 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:05:46.885198 update_engine[1449]: E20250129 12:05:46.885136 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:05:46.885310 update_engine[1449]: I20250129 12:05:46.885216 1449 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 29 12:05:56.885073 update_engine[1449]: I20250129 12:05:56.884958 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:05:56.885609 update_engine[1449]: I20250129 12:05:56.885318 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:05:56.885609 update_engine[1449]: I20250129 12:05:56.885568 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:05:56.886455 update_engine[1449]: E20250129 12:05:56.886392 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:05:56.886560 update_engine[1449]: I20250129 12:05:56.886471 1449 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 29 12:06:06.883401 update_engine[1449]: I20250129 12:06:06.883270 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:06:06.884274 update_engine[1449]: I20250129 12:06:06.883714 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:06:06.884274 update_engine[1449]: I20250129 12:06:06.884032 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:06:06.884882 update_engine[1449]: E20250129 12:06:06.884814 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:06:06.884967 update_engine[1449]: I20250129 12:06:06.884913 1449 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 12:06:06.884967 update_engine[1449]: I20250129 12:06:06.884936 1449 omaha_request_action.cc:617] Omaha request response:
Jan 29 12:06:06.885110 update_engine[1449]: E20250129 12:06:06.885077 1449 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 29 12:06:06.885168 update_engine[1449]: I20250129 12:06:06.885118 1449 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 29 12:06:06.885168 update_engine[1449]: I20250129 12:06:06.885131 1449 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:06:06.885168 update_engine[1449]: I20250129 12:06:06.885142 1449 update_attempter.cc:306] Processing Done.
Jan 29 12:06:06.885280 update_engine[1449]: E20250129 12:06:06.885166 1449 update_attempter.cc:619] Update failed.
Jan 29 12:06:06.885280 update_engine[1449]: I20250129 12:06:06.885178 1449 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 29 12:06:06.885280 update_engine[1449]: I20250129 12:06:06.885190 1449 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 29 12:06:06.885280 update_engine[1449]: I20250129 12:06:06.885201 1449 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 29 12:06:06.885448 update_engine[1449]: I20250129 12:06:06.885317 1449 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 12:06:06.885448 update_engine[1449]: I20250129 12:06:06.885356 1449 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 12:06:06.885448 update_engine[1449]: I20250129 12:06:06.885370 1449 omaha_request_action.cc:272] Request:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]:
Jan 29 12:06:06.885448 update_engine[1449]: I20250129 12:06:06.885382 1449 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:06:06.885831 update_engine[1449]: I20250129 12:06:06.885668 1449 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:06:06.886078 update_engine[1449]: I20250129 12:06:06.885936 1449 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:06:06.886384 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 29 12:06:06.886958 update_engine[1449]: E20250129 12:06:06.886873 1449 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:06:06.887038 update_engine[1449]: I20250129 12:06:06.886967 1449 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 12:06:06.887038 update_engine[1449]: I20250129 12:06:06.886987 1449 omaha_request_action.cc:617] Omaha request response:
Jan 29 12:06:06.887038 update_engine[1449]: I20250129 12:06:06.887002 1449 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:06:06.887038 update_engine[1449]: I20250129 12:06:06.887013 1449 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:06:06.887038 update_engine[1449]: I20250129 12:06:06.887023 1449 update_attempter.cc:306] Processing Done.
Jan 29 12:06:06.887461 update_engine[1449]: I20250129 12:06:06.887037 1449 update_attempter.cc:310] Error event sent.
Jan 29 12:06:06.887461 update_engine[1449]: I20250129 12:06:06.887080 1449 update_check_scheduler.cc:74] Next update check in 40m8s
Jan 29 12:06:06.887561 locksmithd[1491]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 29 12:06:19.751774 systemd[1]: Started sshd@5-91.107.217.81:22-139.178.89.65:47258.service - OpenSSH per-connection server daemon (139.178.89.65:47258).
Jan 29 12:06:20.737838 sshd[4673]: Accepted publickey for core from 139.178.89.65 port 47258 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:20.739925 sshd[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:20.746856 systemd-logind[1448]: New session 6 of user core.
Jan 29 12:06:20.755397 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:06:21.507880 sshd[4673]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:21.514480 systemd[1]: sshd@5-91.107.217.81:22-139.178.89.65:47258.service: Deactivated successfully.
Jan 29 12:06:21.518336 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:06:21.520743 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:06:21.522256 systemd-logind[1448]: Removed session 6.
Jan 29 12:06:26.691180 systemd[1]: Started sshd@6-91.107.217.81:22-139.178.89.65:50026.service - OpenSSH per-connection server daemon (139.178.89.65:50026).
Jan 29 12:06:27.659752 sshd[4712]: Accepted publickey for core from 139.178.89.65 port 50026 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:27.662386 sshd[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:27.667771 systemd-logind[1448]: New session 7 of user core.
Jan 29 12:06:27.677361 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:06:28.405605 sshd[4712]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:28.411701 systemd[1]: sshd@6-91.107.217.81:22-139.178.89.65:50026.service: Deactivated successfully.
Jan 29 12:06:28.414682 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 12:06:28.416031 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit.
Jan 29 12:06:28.418736 systemd-logind[1448]: Removed session 7.
Jan 29 12:06:33.584404 systemd[1]: Started sshd@7-91.107.217.81:22-139.178.89.65:59220.service - OpenSSH per-connection server daemon (139.178.89.65:59220).
Jan 29 12:06:34.571480 sshd[4768]: Accepted publickey for core from 139.178.89.65 port 59220 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:34.573553 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:34.578562 systemd-logind[1448]: New session 8 of user core.
Jan 29 12:06:34.592434 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:06:35.323805 sshd[4768]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:35.329211 systemd[1]: sshd@7-91.107.217.81:22-139.178.89.65:59220.service: Deactivated successfully.
Jan 29 12:06:35.332368 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:06:35.334019 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:06:35.335612 systemd-logind[1448]: Removed session 8.
Jan 29 12:06:35.493546 systemd[1]: Started sshd@8-91.107.217.81:22-139.178.89.65:59228.service - OpenSSH per-connection server daemon (139.178.89.65:59228).
Jan 29 12:06:36.491038 sshd[4782]: Accepted publickey for core from 139.178.89.65 port 59228 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:36.493428 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:36.498512 systemd-logind[1448]: New session 9 of user core.
Jan 29 12:06:36.508358 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:06:37.278625 sshd[4782]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:37.282217 systemd[1]: sshd@8-91.107.217.81:22-139.178.89.65:59228.service: Deactivated successfully.
Jan 29 12:06:37.284694 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:06:37.285447 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:06:37.286651 systemd-logind[1448]: Removed session 9.
Jan 29 12:06:37.455587 systemd[1]: Started sshd@9-91.107.217.81:22-139.178.89.65:59232.service - OpenSSH per-connection server daemon (139.178.89.65:59232).
Jan 29 12:06:38.425094 sshd[4795]: Accepted publickey for core from 139.178.89.65 port 59232 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:38.427383 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:38.432253 systemd-logind[1448]: New session 10 of user core.
Jan 29 12:06:38.438280 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:06:39.169045 sshd[4795]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:39.175414 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:06:39.175446 systemd[1]: sshd@9-91.107.217.81:22-139.178.89.65:59232.service: Deactivated successfully.
Jan 29 12:06:39.178748 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:06:39.179915 systemd-logind[1448]: Removed session 10.
Jan 29 12:06:44.351464 systemd[1]: Started sshd@10-91.107.217.81:22-139.178.89.65:57372.service - OpenSSH per-connection server daemon (139.178.89.65:57372).
Jan 29 12:06:45.318541 sshd[4852]: Accepted publickey for core from 139.178.89.65 port 57372 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:45.320759 sshd[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:45.327249 systemd-logind[1448]: New session 11 of user core.
Jan 29 12:06:45.338381 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:06:46.066290 sshd[4852]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:46.071319 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:06:46.071346 systemd[1]: sshd@10-91.107.217.81:22-139.178.89.65:57372.service: Deactivated successfully.
Jan 29 12:06:46.074400 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:06:46.076972 systemd-logind[1448]: Removed session 11.
Jan 29 12:06:46.245634 systemd[1]: Started sshd@11-91.107.217.81:22-139.178.89.65:57388.service - OpenSSH per-connection server daemon (139.178.89.65:57388).
Jan 29 12:06:47.226919 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 57388 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:47.229644 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:47.234660 systemd-logind[1448]: New session 12 of user core.
Jan 29 12:06:47.243447 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:06:48.017450 sshd[4865]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:48.021870 systemd[1]: sshd@11-91.107.217.81:22-139.178.89.65:57388.service: Deactivated successfully.
Jan 29 12:06:48.023805 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:06:48.026488 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:06:48.027608 systemd-logind[1448]: Removed session 12.
Jan 29 12:06:48.195510 systemd[1]: Started sshd@12-91.107.217.81:22-139.178.89.65:57390.service - OpenSSH per-connection server daemon (139.178.89.65:57390).
Jan 29 12:06:49.175552 sshd[4884]: Accepted publickey for core from 139.178.89.65 port 57390 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:49.177846 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:49.183259 systemd-logind[1448]: New session 13 of user core.
Jan 29 12:06:49.189234 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:06:50.927818 sshd[4884]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:50.934466 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:06:50.935560 systemd[1]: sshd@12-91.107.217.81:22-139.178.89.65:57390.service: Deactivated successfully.
Jan 29 12:06:50.937867 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:06:50.941578 systemd-logind[1448]: Removed session 13.
Jan 29 12:06:51.106418 systemd[1]: Started sshd@13-91.107.217.81:22-139.178.89.65:57394.service - OpenSSH per-connection server daemon (139.178.89.65:57394).
Jan 29 12:06:52.093370 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 57394 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:52.095832 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:52.101640 systemd-logind[1448]: New session 14 of user core.
Jan 29 12:06:52.110340 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:06:52.980395 sshd[4915]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:52.986109 systemd[1]: sshd@13-91.107.217.81:22-139.178.89.65:57394.service: Deactivated successfully.
Jan 29 12:06:52.988854 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:06:52.991338 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:06:52.992927 systemd-logind[1448]: Removed session 14.
Jan 29 12:06:53.161525 systemd[1]: Started sshd@14-91.107.217.81:22-139.178.89.65:42138.service - OpenSSH per-connection server daemon (139.178.89.65:42138).
Jan 29 12:06:54.138755 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 42138 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:54.140878 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:54.145845 systemd-logind[1448]: New session 15 of user core.
Jan 29 12:06:54.154370 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:06:54.882131 sshd[4932]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:54.887759 systemd[1]: sshd@14-91.107.217.81:22-139.178.89.65:42138.service: Deactivated successfully.
Jan 29 12:06:54.889814 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:06:54.890724 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:06:54.891729 systemd-logind[1448]: Removed session 15.
Jan 29 12:07:00.056668 systemd[1]: Started sshd@15-91.107.217.81:22-139.178.89.65:42142.service - OpenSSH per-connection server daemon (139.178.89.65:42142).
Jan 29 12:07:01.032574 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 42142 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:07:01.034500 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:01.041568 systemd-logind[1448]: New session 16 of user core.
Jan 29 12:07:01.046292 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:07:01.782457 sshd[4983]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:01.787267 systemd[1]: sshd@15-91.107.217.81:22-139.178.89.65:42142.service: Deactivated successfully.
Jan 29 12:07:01.789389 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:07:01.790592 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:07:01.791933 systemd-logind[1448]: Removed session 16.
Jan 29 12:07:06.956503 systemd[1]: Started sshd@16-91.107.217.81:22-139.178.89.65:37658.service - OpenSSH per-connection server daemon (139.178.89.65:37658).
Jan 29 12:07:07.939032 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 37658 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:07:07.942658 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:07.948028 systemd-logind[1448]: New session 17 of user core.
Jan 29 12:07:07.956525 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:07:08.686530 sshd[5017]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:08.690912 systemd[1]: sshd@16-91.107.217.81:22-139.178.89.65:37658.service: Deactivated successfully.
Jan 29 12:07:08.693567 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:07:08.694665 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:07:08.695926 systemd-logind[1448]: Removed session 17.